What is reality? Some doctors say that if you perceive something to be real, your body will fill in the blanks to make it a reality. By now we have all seen this, says Joseph Zulick, a writer and manager at MRO Electric and Supply.
That isn't completely true and it makes many assumptions, but there is some science behind it: our brains interpret what they "expect" to see.
Virtual reality (VR) works in a similar way. We accept that the virtual world omits some details, but that doesn't take away from the experience. My dad used to get sick at IMAX movies because the immersion gave you the impression you were flying or driving. To him, perception was reality!
Virtual reality can give us the same kind of interaction with the Internet of Things (IoT). Experience! When we experience things, we react: our pupils dilate, our muscles contract, our breathing and heart rate change. From these feedback mechanisms we can read a person's response to a virtual experience.
Using sensors, we can tell how someone is reacting, and this is driving improvements in the VR/AR (augmented reality) experience. The newest headsets are not displays only; they also track our eye movement and our reaction to what we see. This helps programmers and artificial intelligence (AI) systems recognise when a representation on the screen is confusing or misleading.
If the user is looking to the right when the directions should be taking them to the left, we know the graphics need to be corrected. We use IoT feedback through the self-facing camera to read the individual's reaction. If the instructions become confusing, we can sense through expressions and pupil reactions that the user is becoming frustrated. Using GPS and the forward-facing camera, we can also pinpoint where the frustration is occurring.
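As a rough illustration, a sketch of that gaze check might look like the following. The function name, the compass-style headings, and the 15-degree tolerance are all assumptions for illustration, not any particular headset's API.

```python
# Minimal sketch (hypothetical names and units): flag a mismatch between
# where the eye tracker says the user is looking and where the AR cue points.
def gaze_mismatch(gaze_deg: float, cue_deg: float, tolerance_deg: float = 15.0) -> bool:
    """Return True when the gaze heading deviates from the cue heading."""
    # Wrap the difference to [-180, 180) so 350 deg and -10 deg compare as 0 apart.
    diff = (gaze_deg - cue_deg + 180.0) % 360.0 - 180.0
    return abs(diff) > tolerance_deg

# Example: the cue points left (270 deg) but the user keeps looking right (90 deg).
if gaze_mismatch(gaze_deg=90.0, cue_deg=270.0):
    print("User is looking away from the cue - review the on-screen graphics")
```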
Self-evolving programs
Programs are now self-evolving: they can determine when what is being transmitted via the cloud is not producing the desired response from the user. If a technician is using AR goggles that walk them through a repair using tags on a piece of equipment along with global positioning, we can tell whether the AR tags are in the right locations or whether they are getting lost.
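A minimal sketch of that tag check, assuming hypothetical tag IDs, an equipment-frame coordinate system, and a 10 cm tolerance:

```python
import math

EXPECTED_TAGS = {  # tag_id -> expected (x, y, z) in metres, in the equipment frame
    "valve_A": (0.20, 1.10, 0.45),
    "panel_B": (1.05, 0.80, 0.30),
}

def misplaced_tags(detected, tolerance_m=0.10):
    """Return tag IDs that are missing or further than tolerance from expectation."""
    bad = []
    for tag_id, expected in EXPECTED_TAGS.items():
        seen = detected.get(tag_id)
        if seen is None:
            bad.append(tag_id)                      # not detected at all - "getting lost"
        elif math.dist(seen, expected) > tolerance_m:
            bad.append(tag_id)                      # drifted from its documented location
    return bad

# Example: valve_A detected 30 cm off, panel_B never seen.
print(misplaced_tags({"valve_A": (0.20, 1.40, 0.45)}))  # ['valve_A', 'panel_B']
```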
The feedback from the sensors reveals repetitive actions, because users walk through the same steps numerous times. Assembly and warehouse systems are improving their efficiency and accuracy with tools like Google Glass, integrated into supply chain systems that improve the bottom line by reducing mistakes and speeding up the process.
Some of the systems, such as pick-by-vision, aim to show metrics that benefit the company on the two main factors: accuracy and speed. The systems are, of course, dependent on the inventory management and warehouse location systems into which you integrate.
Pick-by-vision systems use augmented reality, so the individual's actual vision becomes the base over which all the other information is overlaid: mapping information, directions, photos of the item to be picked, and bar codes to be compared. The great part of these systems is that they can operate in real time and improve through AI.
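As a rough sketch of the bar-code comparison and the two metrics, assuming hypothetical pick lists and a session timer:

```python
# Minimal sketch (hypothetical data shapes): score a pick session on the two
# metrics named above - accuracy and speed.
def score_pick_session(expected_barcodes, scanned_barcodes, seconds_elapsed):
    picks = list(zip(expected_barcodes, scanned_barcodes))
    correct = sum(1 for want, got in picks if want == got)
    accuracy = correct / len(picks) if picks else 0.0
    picks_per_minute = len(picks) / (seconds_elapsed / 60.0)
    return accuracy, picks_per_minute

# Example: 3 picks, one mispick, completed in 90 seconds.
acc, rate = score_pick_session(["A123", "B456", "C789"],
                               ["A123", "B999", "C789"], 90.0)
print(f"accuracy={acc:.0%}, speed={rate:.1f} picks/min")  # accuracy=67%, speed=2.0
```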
Trade-off in quality
Much of the success and failure of these systems comes down to the quality of the vision system and the cameras. Of course, the higher the quality, the more data there is, and the slower the response time can be unless your complete system is upgraded to utilise the data from the camera. This is also why many of the systems opt for cheaper GPS technology and scanning tags. These allow for lower resolution, but the trade-off is losing the finer analysis that higher-quality vision systems offer.
Vision systems give you the advantage of recognising and comparing location based on visual cues. Simpler versions use tags similar to QR codes, scaling the distance based on the known location and size of the tag. This can be confusing, though, as angular distortion and changes of position can take away some of the accuracy.
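That size-based ranging follows the standard pinhole-camera model. A minimal sketch, where the focal length in pixels is an assumed calibration value:

```python
# Estimate camera-to-tag distance from the tag's known physical size.
# Real systems would also correct for lens and angular distortion.
def tag_distance_m(tag_width_m: float, tag_width_px: float,
                   focal_length_px: float) -> float:
    return tag_width_m * focal_length_px / tag_width_px

# Example: a 10 cm tag spanning 50 px with an 800 px focal length reads as
# 1.6 m away. Viewing the tag at an angle shrinks its apparent width,
# which is exactly the accuracy loss mentioned above.
print(tag_distance_m(0.10, 50.0, 800.0))  # 1.6
```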
Determining the picking or assembly order is where most of these tools really excel and become more than a glorified laptop, tablet, or smartphone. Programs that take information from the IoT and use the positioning data to determine an optimised flow really capitalise on the system, and they can then use AI to learn the best flow sequence.
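As an illustration of that flow optimisation, here is a greedy nearest-neighbour ordering over hypothetical shelf coordinates; a real system would draw on the warehouse location system and more sophisticated routing.

```python
import math

def nearest_neighbour_route(start, picks):
    """start: (x, y); picks: dict of item -> (x, y). Returns items in visit order."""
    remaining = dict(picks)
    here, route = start, []
    while remaining:
        # Always walk to the closest unvisited pick location next.
        item = min(remaining, key=lambda i: math.dist(here, remaining[i]))
        here = remaining.pop(item)
        route.append(item)
    return route

print(nearest_neighbour_route((0, 0), {"A": (5, 1), "B": (1, 1), "C": (6, 4)}))
# ['B', 'A', 'C']
```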
The key to many of these systems is the feedback loop from the IoT, sensors, or tags. The readings must be communicated elsewhere so they can be compared and fed back as useful information.
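A minimal sketch of one side of that loop, assuming a hypothetical HTTP ingest endpoint; many deployments would use a broker protocol such as MQTT instead.

```python
import json
import urllib.request

# Push a sensor/tag reading to a backend where it can be compared against
# expectations and turned into feedback. The endpoint and payload shape are
# invented for illustration.
def publish_reading(device_id: str, reading: dict,
                    endpoint: str = "http://example.com/ingest") -> None:
    payload = json.dumps({"device": device_id, "reading": reading}).encode()
    req = urllib.request.Request(endpoint, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # fire-and-forget here; handle errors in practice

publish_reading("glass-07", {"pick": "A123", "gaze_deg": 92.0})
```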
Warehouses and factories
In warehouse systems, the key questions are mainly whether you picked the right part and how long it took. In assembly systems, these AR and VR tools can integrate a virtual obstruction that isn't physically there but is anticipated to be installed. This way you can run analysis checks and ergonomic assessments before installing the equipment that will cause the obstruction.
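One simple way to run such an obstruction check is a box-overlap test between the planned equipment and the worker's reach envelope. The geometry below is invented for illustration.

```python
# Test whether a planned piece of equipment, modelled as an axis-aligned box,
# intrudes into the space an ergonomic assessment says the worker needs.
def boxes_overlap(a, b) -> bool:
    """Each box is ((min_x, min_y, min_z), (max_x, max_y, max_z)) in metres."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    return all(a_lo[i] <= b_hi[i] and b_lo[i] <= a_hi[i] for i in range(3))

reach_envelope = ((0.0, 0.0, 0.8), (1.2, 0.9, 1.9))  # space the worker needs
planned_duct = ((1.0, 0.5, 1.5), (2.0, 0.7, 1.7))    # equipment to be installed
print(boxes_overlap(reach_envelope, planned_duct))    # True: flag for redesign
```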
Training can also really benefit from these tools. VR creates an environment similar to the one the trainee will be working in without investing millions in equipment and training machines. This can greatly reduce the learning curve.
These digital twins allow you to train but, more importantly, to create a duplicate environment that simulates conditions without the cost or the engineering effort. Many companies produce these VR simulations prior to final assembly and testing. This allows the final systems to be tweaked before hard tooling is produced, which can take weeks or months and cost thousands of dollars.
The author is Joseph Zulick, writer and manager at MRO Electric and Supply.
Comment on this article below or via Twitter: @IoTNow_ OR @jcIoTnow