The first of two webinars organized jointly with the IHK Region Stuttgart for the "SensAR" project took place on Wednesday, March 24, 2021, and dealt with object recognition and visualization. A total of 10 professors and staff members explained the current status of the tools under development and responded to user questions and suggestions.
The SensAR project is funded by the Carl Zeiss Foundation with €750,000 in its "Transfer" program line, so exchange with users and transfer of the results are central to the project. An in-person event had originally been planned and hoped for, at which the laboratories, the measuring chambers, and the AR applications could be demonstrated live to interested SMEs. Halfway through the project, there is definitely already a lot to show!
The format therefore had to be adapted to an online webinar. This did not dampen the response from SMEs: 30 registrations demonstrated both the interest in and the need for augmented reality solutions in production.
We want to give SMEs easy access to the technology field of augmented reality.
At the beginning of the one-hour webinar, Prof. Volker Coors, the overall project manager, explained the idea behind the SensAR project. The acronym stands for "Mediation of location- and context-related sensor data by means of augmented reality". The approach focuses on generalizable processes that occur in many companies, without intruding into specific production processes. The goal is to provide relief through automated data capture and digital assistance systems, giving small and medium-sized companies easier access to the technology field of augmented reality.
The technology market has developed rapidly, especially in wireless sensors, communication technologies, and sensor networks, and this trend can also be observed in the Internet of Things (IoT) and Industry 4.0. What has been lacking, however, is a holistic approach that unites research areas such as dynamic object recognition, localization, sensing and standards, UI, and privacy and security. In this first of two webinars, object recognition and user-interface visualization were discussed with the participants in two workshops. After a short pitch to the whole group, each participant could decide which of the two breakout workshops to attend – attendance was pleasingly balanced.
Example applications on real systems and objects best show what is already possible.
In the first breakout session, previous work on object recognition was presented using example applications. The precise detection of objects in factory halls, such as the almost ubiquitous load carriers, met with lively interest. Behind this simple-sounding task lies highly accurate terrestrial laser scanning, which generates a three-dimensional point cloud. A neural network then segments this cloud semantically, i.e. assigns a label to each point. Within these semantically segmented point clouds, object recognition is performed and an exact position can be determined and displayed. A second approach uses photogrammetry instead of laser scanning as the data source: point clouds can also be derived automatically from image recordings in order to recognize mobile objects such as load carriers.
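The last step of this pipeline can be illustrated with a minimal sketch. It assumes the per-point labels have already been produced by a segmentation network (the class id `LOAD_CARRIER` and the function name are hypothetical, not from the project); the sketch merely selects all points carrying the target label and averages them into a position estimate:

```python
import numpy as np

# Hypothetical class id; in the project, a neural network assigns one label per point.
LOAD_CARRIER = 1

def locate_object(points, labels, target=LOAD_CARRIER):
    """Estimate an object position from a semantically segmented point cloud.

    points: (N, 3) array from laser scanning or photogrammetry
    labels: (N,) array of per-point class ids (the segmentation result)
    Returns the centroid of all points with the target label, or None.
    """
    mask = labels == target
    if not mask.any():
        return None
    return points[mask].mean(axis=0)  # simple position estimate

# Toy point cloud: two floor points (label 0) and a load carrier (label 1).
pts = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 0.0],
                [2.0, 3.0, 0.5], [2.2, 3.2, 0.7]])
lbl = np.array([0, 0, 1, 1])
print(locate_object(pts, lbl))  # centroid near [2.1, 3.1, 0.6]
```

A real system would separate multiple instances of the same class (e.g. by clustering the labeled points) before averaging; the centroid here stands in for that step.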
Semantically segmented areas in images can be used in AR applications to visualize objects, as well as measured values with a spatial reference. This was demonstrated using an RFID measurement chamber and a smartphone in video mode.
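The idea of attaching a measured value to a segmented image region can be sketched as follows. This is an assumption-laden illustration, not the project's implementation: the segmentation mask is taken as given, and the overlay is anchored at the region's pixel centroid, where an AR app would draw the sensor reading:

```python
import numpy as np

def overlay_anchor(mask):
    """Return (row, col) pixel coordinates at which to draw an AR label,
    taken as the centroid of a semantically segmented image region."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return int(rows.mean()), int(cols.mean())

# Toy 6x6 segmentation mask: True pixels mark the recognized object.
frame_mask = np.zeros((6, 6), dtype=bool)
frame_mask[2:4, 1:5] = True           # object occupies rows 2-3, cols 1-4
anchor = overlay_anchor(frame_mask)
label = f"23.5 °C @ {anchor}"         # hypothetical sensor reading to display
print(label)
```

In a live video stream, this anchoring would be recomputed per frame so the overlay stays attached to the object as the camera moves.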
In the second breakout session, the HFT's own RFID measurement chamber served to demonstrate the AR applications developed in the SensAR project. Short videos took the participants into the labs and showed which data overlays have so far been implemented on mobile Android devices and the HoloLens 2. Indoor navigation was also part of the use case. The question of added value for the users was always explicitly in the foreground: How much content should be displayed? Is the visualization intuitive? How good is the interaction with the virtual and real objects?
The participants actively contributed their own ideas and suggestions. These supplemented the existing application examples with new ones, so that the solutions developed so far can be aligned even more closely with actual needs.
We are looking forward to the second webinar on April 14, at which sensor technology, localization, and data security will be presented.