
Projects

Third-party funded projects of the Chair of Media Computing

The following is a list of third-party funded projects that are or have been supported by the public sector. We also carry out contract research for industry.

Funding:

Period: 09/2021 - 08/2025

Organisation: Bundesministerium für Bildung und Forschung (BMBF)

Financial Support: Total volume: ~10 million euros, own contribution: approx. 800.000€

Description:

A digital ecosystem to strengthen medical research, diagnostics and therapy in Saxony: the Digital Hub for Progress MiHUBx aims to link a wide variety of players and initiatives in health research and care into a sustainable, flexible system that is both future-proof and capable of growth.

MiHUBx Use Case 1 "Diabetology meets Oncology": The groups of Jun.-Prof. Kowerko (TU Chemnitz, IT project management) and Prof. Ritter (University of Applied Sciences Mittweida) will jointly create interfaces to the medical partners (e.g. the Saxon Macular Center) and connect them, in aggregated form, to a central data integration center infrastructure, making the data (more) usable for research purposes. Exact requirements regarding the core data to be collected and the basic (algorithmic) functionality will be gathered together with the medical partners (ophthalmologists, diabetologists, general practitioners) and will flow into a mockup, which will be developed iteratively and participatively into a demonstrator. Acceptance and benefit of the human-technology interfaces are surveyed systematically, with a focus on how physicians work with the AI modules. The data sets previously defined by the medical partners will be integrated.

Prof. Ritter's group will focus on the analysis and processing of images and volumes from OCT devices and of text excerpts for treatment analysis in correlation with visual acuity over different time spans, while Media Computing will focus on the medical texts of electronic patient records and on providing them in structured form according to generally accepted (international) terminologies/ontologies such as SNOMED CT (a sketch of this structuring step follows below). The demonstrator will include basic annotation, data exploration and treatment visualization capabilities, as well as therapy support algorithms, specifically for disease progression under different treatment scenarios for diabetes and diabetic eye disease. Usability and user experience are evaluated continuously. Prof. Dr. med. Katrin Engelmann (Chief Physician of the Eye Clinic of Klinikum Chemnitz) and Prof. Dr. med. Peter Schwarz (University Hospital Dresden, focus on prevention) are the medical directors.
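
A minimal sketch of what this structuring step could look like, assuming free-text diagnoses from patient records are matched against a small lookup table of terminology concepts. The helper function, the example note and the concept identifiers are purely illustrative; in particular, the concept IDs are placeholders, not real SNOMED CT codes.

```python
# Placeholder mapping from German diagnosis phrases to terminology concepts.
# The concept IDs are illustrative placeholders, NOT real SNOMED CT codes.
TERMINOLOGY = {
    "diabetisches makulaödem": {"system": "SNOMED CT", "concept_id": "<placeholder-1>"},
    "diabetes mellitus typ 2":  {"system": "SNOMED CT", "concept_id": "<placeholder-2>"},
}

def structure_diagnoses(note: str):
    """Return structured terminology entries for phrases found in a free-text note."""
    text = note.lower()
    return [
        {"phrase": phrase, **concept}
        for phrase, concept in TERMINOLOGY.items()
        if phrase in text
    ]

note = "Bekannter Diabetes mellitus Typ 2, jetzt diabetisches Makulaödem am rechten Auge."
print(structure_diagnoses(note))
```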

Project partners:

  • Klinikum Chemnitz gGmbH, Augenklinik, Prof. Dr. med. Katrin Engelmann
  • Klinikum Chemnitz gGmbH, Abteilung Informatik/Datenintegrationszentrum, Dr. Frank Nüßler, Martin Bartos
  • TU Dresden, Zentrum für medizinische Informatik, Prof. Dr. Martin Sedlmayr
  • TU Dresden, Med. Klinik III (Prävention und Diabetes), Prof. Dr. med. Peter Schwarz
  • Hochschule Mittweida, Professur Medieninformatik, Prof. Dr. Marc Ritter
  • Hochschule Mittweida, Professur für Digitale Transformation und Angewandte, Prof. Dr. Christian Roschke


Funding:

Period: 07/2023 - 06/2027

Organisation: Europäischer Sozialfonds Plus (ESF Plus) im Freistaat Sachsen / SAB

Financial Support: 81.600€

Description:

The main objective of visual inspection systems is to detect defects throughout the manufacturing process or at the end of the respective sub-process steps. The availability of annotated data is generally limited, because in the industrial context the data are proprietary and, for process engineers, annotating defects is very labor-intensive and requires expert knowledge. While defects are generally rare thanks to optimized processes, they can cause significant disruption to the production process when they do occur. For this work, we use high-resolution images from the inspection systems of our regional partners such as 3D-Micromac AG. These images are taken at the end of a so-called dicing process step, in which wafers are cut with a laser-based method and then separated into hundreds to thousands of chips. Preliminary work shows that defect detection with neural networks is very powerful. However, if only one out of 100 chips is faulty, the positive predictive value is only about 50%, i.e. half of the detected faults are actually not faults (false positives), which may still be too high for practical use of such a system in an industrial context (see the calculation sketched below); accordingly, the classification process needs to be refined further. In addition, the AI models do not generalize well enough to new, previously unseen wafer image material; this must be counteracted with data synthesis methods.
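
The quoted figure can be reproduced with a short Bayes calculation. The sensitivity and specificity values below are illustrative assumptions rather than measured results of the project; they merely show how a defect rate of 1 in 100 drives the positive predictive value towards 50% even for a very accurate classifier.

```python
# Illustrative Bayes calculation: how a low defect rate (prevalence) pushes
# the positive predictive value (PPV) down, even for a strong classifier.
def positive_predictive_value(prevalence, sensitivity, specificity):
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Assumed values: 1 faulty chip per 100, 99% sensitivity, 99% specificity.
ppv = positive_predictive_value(prevalence=0.01, sensitivity=0.99, specificity=0.99)
print(f"PPV: {ppv:.2f}")  # -> 0.50, i.e. half of all flagged chips are false alarms
```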

Project partners:

  • 3D-Micromac AG, Dr. Michael Grimm

Funding:

Period: 10/2018 - 03/2021

Organisation: Sächsisches Staatsministerium für Wissenschaft, Kultur und Tourismus (SMWK)

Financial Support: Total volume: ~100.000€

Description:

The aim was to test the hypothesis that reduced cell viability at the back of the eye (fundus) can be visualized spectrally at an early stage. For this purpose, a commercial fundus camera was modified so that spectrally filtered fundus images could be obtained from approximately 100 patients. The filtering is adapted to the spectral absorption range of the molecule cytochrome C, which is associated with cell death. Medical findings annotated on the fundus images could be classified better with AI-based image analysis when the additional color-filtered fundus images were used alongside the standard color fundus images (a minimal sketch of such a multi-channel setup follows below).
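
A minimal sketch of how such an additional spectrally filtered channel could be combined with a standard color fundus image as input to a CNN classifier, assuming the filtered image is registered to the RGB image. The tensor sizes, layer widths and the two-class output are placeholder assumptions, not the model used in the project.

```python
import torch
import torch.nn as nn

# Assumed inputs: an RGB fundus image and a co-registered, spectrally
# filtered image of the same eye (cytochrome C absorption band).
rgb = torch.rand(3, 512, 512)        # standard color fundus image
filtered = torch.rand(1, 512, 512)   # additional color-filtered image

# Stack the filtered image as a fourth input channel.
x = torch.cat([rgb, filtered], dim=0).unsqueeze(0)  # shape (1, 4, 512, 512)

# Placeholder classifier with a 4-channel first convolution.
model = nn.Sequential(
    nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),                # e.g. "finding present" vs. "no finding"
)
logits = model(x)
```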

Project partners:

  • Klinikum Chemnitz gGmbH, Augenklinik, Prof. Dr. med. Katrin Engelmann

Funding:

Period: 06/202 - 11/2022

Organisations: Deutscher Akademischer Austauschdienst (DAAD), CAPES

Program: "Co-funded Research Grants – Short-Term Grants"

Financial Support: DAAD: 5.350€, CAPES: ~2.500€

Description:

The automation of processes previously carried out by humans is becoming ever more common, e.g. in the field of visual defect inspection. However, visual inspection is still a complex task due to the lack of public data, class imbalance, and the specific geometry of defects. The aim is to evaluate how Deep Learning-based models can be combined and optimized to detect domain-specific defects of varying size, exploring both public datasets and our own datasets from different scenarios (e.g. power line and semiconductor wafer inspection); a sketch of such a multi-scale detector follows below.
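
As one possible instantiation of such a detector, the sketch below fine-tunes an off-the-shelf Faster R-CNN with a feature pyramid network from torchvision, whose multi-scale features help with defects of very different sizes. The two-class setup (background vs. defect) and the random tensors standing in for annotated images are assumptions for illustration, not the models actually evaluated in the project.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Two classes: background (0) and "defect" (1).
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
model.train()

# Dummy batch standing in for annotated inspection images.
images = [torch.rand(3, 800, 800)]
targets = [{
    "boxes": torch.tensor([[100.0, 120.0, 180.0, 200.0]]),  # one defect box (x1, y1, x2, y2)
    "labels": torch.tensor([1]),
}]

# In training mode the model returns a dict of losses to sum and backpropagate.
losses = model(images, targets)
total_loss = sum(losses.values())
total_loss.backward()
```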

Project partner:

  • Voxar Labs at the Informatics Center of UFPE, Cidade Universitária (Campus Recife)


Funding:

Period: 10/2018 - 06/2021

Organisations: Novartis Pharma GmbH, GWT GmbH

Financial Support: Total: ~180.000€, Media Computing: 110.000€

Description:

OphthalVis2.0 aimed at transferring data obtained by text mining into a structured database for analyses of the visual acuity progression of patients with age-related macular degeneration, diabetic macular edema and retinal vein occlusion under intravitreal surgical drug application (IVOM) therapy. Together with the Eye Clinic of Klinikum Chemnitz gGmbH, almost 50,000 patient data sets were structured, allowing systematic statistical analyses of the visual acuity course as well as first machine-learning models of visual acuity under IVOM therapy; a sketch of such a text-mining step is shown below.
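
A minimal sketch of the kind of text-mining step involved, assuming German ophthalmological notes that record decimal visual acuity per eye as, e.g., "Visus RA 0,6". The regular expression and the example sentence are illustrative assumptions, not the extraction pipeline actually built in the project.

```python
import re

# Example sentence as it might appear in an electronic patient record (assumed).
note = "Visus RA 0,6, LA 0,8 nach dritter IVOM."

# Capture eye laterality (RA = right eye, LA = left eye) and decimal visual acuity.
pattern = re.compile(r"\b(RA|LA)\s+(\d,\d+)")

records = [
    {"eye": eye, "visual_acuity": float(value.replace(",", "."))}
    for eye, value in pattern.findall(note)
]
print(records)  # [{'eye': 'RA', 'visual_acuity': 0.6}, {'eye': 'LA', 'visual_acuity': 0.8}]
```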

Project partners:

  • Klinikum Chemnitz gGmbH, Augenklinik, Prof. Dr. med. Katrin Engelmann

Funding:

Period: 10/2018 - 09/2021

Organisation: Europäischer Sozialfonds (ESF) / SMWK

Financial Support: 54.000€

Description:

Inspired by the human visual perception system, hexagonal image processing in the context of machine learning is concerned with the development of image processing systems that exploit the advantages of biologically motivated, hexagonally structured representations. While classical image processing systems rely almost exclusively on square-lattice-based methods, given the current state of the art of input and output devices, their hexagonal counterparts offer a number of decisive advantages that can benefit researchers and users alike. As a first application-oriented approach, this project deals with the synthesis of a framework designed for this purpose, called Hexnet, with the processing steps of the hexagonal image transformation, and with the dependent (machine learning) methods, which are applied with a focus on robust object classification in clinical as well as industrial problem areas; a sketch of the square-to-hexagonal resampling step follows below.
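
A minimal sketch of the underlying square-to-hexagonal resampling idea, assuming nearest-neighbour sampling onto an offset-row hexagonal lattice; this only illustrates the transformation step and is not the Hexnet implementation itself.

```python
import numpy as np

def square_to_hex(image, spacing=1.0):
    """Resample a square-lattice image onto a hexagonal lattice.

    Rows of the hexagonal lattice are spaced by sqrt(3)/2 * spacing and every
    other row is shifted by half a pixel; each hexagonal sample takes the value
    of the nearest square-lattice pixel.
    """
    h, w = image.shape[:2]
    row_step = spacing * np.sqrt(3) / 2
    hex_rows = []
    for r in range(int(h / row_step)):
        y = r * row_step
        x_offset = spacing / 2 if r % 2 else 0.0
        xs = np.arange(x_offset, w, spacing)
        yi = min(int(round(y)), h - 1)                     # nearest row
        xi = np.clip(np.round(xs).astype(int), 0, w - 1)   # nearest columns
        hex_rows.append(image[yi, xi])
    return hex_rows  # ragged list: one 1-D array of samples per hexagonal row

samples = square_to_hex(np.random.rand(64, 64))
```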

Project partners:

  • Klinikum Chemnitz gGmbH, Augenklinik, Prof. Dr. med. Katrin Engelmann
  • Intenta GmbH, Dr. Basel Fardi, Jan Schloßhauer
  • 3D-Micromac AG, Dr. Michael Grimm

Funding:

Period: 09/2014 - 02/2016

Organisation: Novartis GmbH

Financial Support: Total: ~200.000€, Media Computing: 100.000€

Description:

The aim was to show how software tools can support ophthalmologists, e.g. through visualization of ophthalmic data or diagnostic support in the case of retinal damage assessed by optical coherence tomography (OCT). An Android app was developed for demonstration purposes that quantifies and displays the degree of damage to the retinal pigment epithelium using image processing. Furthermore, a prototype was developed that presents OCTs from different manufacturers in a unified application and transfers the data into a uniform format; a sketch of such a vendor-neutral record follows below.
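
A minimal sketch of what such a vendor-neutral OCT record could look like; the field names and example values are assumptions for illustration, not the uniform format actually used in the prototype.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OctScan:
    """Vendor-neutral container for an OCT volume exported from any device."""
    patient_id: str
    eye: str                    # "OD" (right) or "OS" (left)
    device_vendor: str          # original manufacturer/export format
    acquisition_date: str       # ISO date, e.g. "2015-03-17"
    bscan_spacing_um: float     # distance between adjacent B-scans in micrometres
    bscans: List[str] = field(default_factory=list)  # paths to exported B-scan images

# Example record built from a hypothetical vendor export.
scan = OctScan("anon-0001", "OD", "VendorA", "2015-03-17", 120.0,
               ["bscan_000.png", "bscan_001.png"])
```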

Project partners:

  • Ophthalmologists from Freiburg and Chemnitz
  • Jun.-Prof. Dr. Paul Rosenthal, Juniorprofessur Visual Computing

Publications:

  • Rosenthal, Paul, Marc Ritter, Danny Kowerko, and Christian Heine. "OphthalVis - Making Data Analytics of Optical Coherence Tomography Reproducible". In EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (EuroRV3), edited by Kai Lawonn, Mario Hlawitschka, and Paul Rosenthal, 1–5. Groningen: The Eurographics Association, 2016. https://doi.org/10.2312/eurorv3.20161109.

Funding:

Period: 08/2014 - 07/2019

Organisation: Bundesministerium für Bildung und Forschung (BMBF)

Financial Support: 2.5 million euros

Description:

The InnoProfile transfer initiative "localizeIT - Localization of visual media" is based on the results of the junior research group sachsMedia (see the section "completed projects", Chair of Media Informatics). The InnoProfile initiative sachsMedia dealt intensively with image analysis, especially of video material, with the goal of detecting persons and objects; the use case was material from the film industry as produced by local television. The goal of localizeIT is to address the problems and needs of further use cases that have emerged from discussions and cooperations with companies in the Chemnitz region over the past years. These can be summarized under the topic of localization of (audio)visual media, which the project approaches from three points of view: localization of the medium (Where did a photograph originate?), localization in the medium (Where are which objects placed in an image?), and localization in the world (Where are the objects shown in the images positioned in the real world?).

The project was conceived as a companion project to the endowed junior professorship Media Computing together with the professorship Media Informatics. The use cases included person recognition and behavior analysis in space based on audio, image and video recordings. It was also investigated to what extent image and video processing can contribute to the characterization of melting zones during laser welding, but also to the characterization of the resulting product after welding, e.g. the quality of a laser-guided welding process in the separation of chips in the semiconductor industry.

Project partners:

Donating Companies:

  • Intenta GmbH, Dr. Basel Fardi, Jan Schloßhauer
  • 3D-Micromac AG, Dr. Michael Grimm
  • 3D Insight GmbH, Dr. Stephan Rusdorf
  • IBS Software und Research GmbH

Publications:

  • Kowerko, Danny, Robert Manthey, René Erler, Thomas Kronfeld, Tobias Schlosser, Frederik Beuth, Tom Kretzschmar, et al. "Schlussbericht zum InnoProfile-Transfer Begleitprojekt localizeIT". Final report. Chemnitzer Informatik-Berichte. Chemnitz: TU Chemnitz, 29 January 2019. https://www.tu-chemnitz.de/informatik/service/ib/pdf/CSR-20-02.pdf.