Block I: Automotive

In the first block, which began on November 4, 2019, the students dealt intensively with the technologies of automotive software engineering. The technical director was Julkar Nine, M.Sc.

Tasks within Block I:

  • Pedestrian detection using the Haar Cascade Classifier
  • Track detection and curvature estimation using image processing
  • Traffic sign recognition using deep learning techniques
  • Generation of facts from sensor data using PiCAN 2 modules
  • Generation of facts from image processing algorithms

All in all, the tasks of the project are closely related and work toward the overarching topic of "situation awareness": the ability to make decisions is to be supported by perceiving and taking in information from the environment. From the information perceived in tasks 1-4, interpretable facts could be generated, on the basis of which decisions were initiated (see task 4).

Working on tasks one to three involved different techniques of image processing, machine learning, and deep learning. The purpose of these tasks was to generate information perceivable from the external environment. The target platform for all tasks was the Raspberry Pi 3B+, which had already been integrated into the "CE-Box" demonstrator by computer engineers at TU Chemnitz.

By completing the first task, the students were able to develop a pedestrian-detection algorithm that was more accurate than the pretrained classifier shipped with OpenCV. The students working on task 2 were also successful: they programmed an algorithm that estimates the degree of road curvature to the left or right from the slope of the fitted lane lines. In the third task, R-CNN methods were used to detect and recognize traffic signs (16 classes) with the aim of high accuracy; this result still requires further optimization.
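The slope-and-curvature idea from task 2 can be sketched as follows. Assuming lane pixels have already been extracted by an earlier image-processing stage (e.g. thresholding and a perspective transform), a second-order polynomial fit yields both the bend direction and a radius of curvature. All names and the synthetic data below are illustrative, not the students' actual code.

```python
import numpy as np

def estimate_curvature(xs, ys):
    """Fit x = A*y^2 + B*y + C to lane pixel coordinates and return
    (radius_of_curvature, direction) evaluated at the bottom of the
    image (largest y), where the lane meets the vehicle."""
    A, B, C = np.polyfit(ys, xs, 2)
    y_eval = np.max(ys)
    # Radius of curvature of the curve x(y) at y_eval.
    radius = (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)
    # The sign of the quadratic term gives the bend direction
    # (image coordinates: x grows to the right).
    direction = "right" if A > 0 else "left"
    return radius, direction

# Synthetic lane pixels bending to the right (illustrative only).
ys = np.linspace(0, 100, 50)
xs = 0.002 * ys ** 2 + 0.1 * ys + 300
r, d = estimate_curvature(xs, ys)
print(d)  # right
```

In a real pipeline the radius would additionally be converted from pixels to meters using a calibrated scale factor before being reported.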

After the algorithms had been completed and tested, the results were passed on to the students working on the fifth task. The fourth task also dealt with perception, but referred to the vehicle's internal information. The aim here was to generate synthetic sensor data and pass the perceived information on to the students of the next task. A GUI (graphical user interface) served as the sensor module, providing synthetic data such as speed, fuel level, etc. The students built this GUI so that synthetic sensors could be added or removed depending on the purpose. The data was then forwarded to task group five via CAN messages; group five generated the relevant facts and sent them back to group four. There, the continuously running cycle was closed by changing the synthetic sensor values according to previously defined rules.
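As a rough illustration of the sensor-to-facts exchange between groups four and five, the sketch below packs synthetic sensor values into the 8-byte data field of a classic CAN frame. The byte layout and the frame ID are assumptions made for this example; the workshop's actual message format is not documented here.

```python
import struct

# Hypothetical payload layout for one synthetic-sensor CAN frame
# (an assumption -- the workshop's real layout is not documented here):
#   bytes 0-1: speed in 0.1 km/h steps, unsigned 16-bit big-endian
#   byte  2  : fuel level in percent, unsigned 8-bit
#   bytes 3-7: padding
SENSOR_FRAME_ID = 0x123  # arbitrary example arbitration ID

def encode_sensor_frame(speed_kmh: float, fuel_percent: int) -> bytes:
    """Pack synthetic sensor values into an 8-byte CAN data field."""
    raw_speed = int(round(speed_kmh * 10))
    return struct.pack(">HB5x", raw_speed, fuel_percent)

def decode_sensor_frame(data: bytes) -> tuple:
    """Inverse of encode_sensor_frame, as the fact-generation group
    (task 5) would apply it before evaluating its rules."""
    raw_speed, fuel = struct.unpack(">HB5x", data)
    return raw_speed / 10.0, fuel

payload = encode_sensor_frame(87.5, 42)
assert len(payload) == 8  # classic CAN frames carry at most 8 data bytes
print(decode_sensor_frame(payload))  # (87.5, 42)
```

On the PiCAN 2 hardware, such a payload would typically be transmitted over the SocketCAN interface, e.g. with the python-can library.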