In just the last few years, advances in machine vision have helped many robots and autonomous vehicles achieve nearly human-like levels of perception. Using an array of optical sensors, like high-resolution cameras, these robots and cars are, in their own way, finally able to see.
What Is Machine Vision?
Machine vision is the technology that enables robots and other machines, like autonomous vehicles, to see and recognize objects in their surrounding environment. By pairing optical sensors with artificial intelligence and machine learning tools that can analyze and process image data, robots and autonomous vehicles equipped with machine vision systems are able to perform more complex tasks, like pulling orders in a warehouse or navigating downtown traffic.
As camera prices have fallen, compute power has increased and algorithms have matured, machine vision has helped robotics emerge from what Tom Hummel, vice president of technology at Rapid Robotics, described as a statically programmed state. Thanks to recent advances in deep learning, which allows robots to actually analyze what they’re seeing, robots can now complete tasks that once seemed impossible or cost-prohibitive, like picking specific items out of a bin, Hummel said.
How Does Machine Vision Work?
Most machine vision systems require a light source, either mounted directly on the robot or set up within the facility where it operates, so the camera (or cameras) used can clearly capture objects, humans, potential hazards and other features in the surroundings.
Once the robot captures images, that visual data is sent to a processor or onboard computer that analyzes the images using artificial intelligence and machine learning algorithms, often alongside data collected from other sensing modalities like LiDAR, radar and microphones.
After the images and other data are processed, that information is communicated back to the robot or other machines working alongside it. From there, the machines can make appropriate decisions, whether that’s stopping at a crosswalk or picking the right item for an order, which improves efficiency and safety.
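To make that sense-process-act loop concrete, here is a minimal Python sketch using OpenCV. It is an illustration, not any particular vendor’s system: classify_frame is a hypothetical stand-in for the trained model a real robot would run, and it only checks frame brightness so the example stays self-contained.

```python
import cv2  # OpenCV, used here for camera capture and basic image handling


def classify_frame(frame):
    # Hypothetical stand-in for a trained deep learning model.
    # We simply flag very dark frames as "obstructed" to keep the sketch small.
    brightness = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean()
    return "obstructed" if brightness < 40 else "clear"


cap = cv2.VideoCapture(0)   # 1. Sense: grab a frame from the camera
ok, frame = cap.read()
cap.release()

if ok:
    decision = classify_frame(frame)  # 2. Process: analyze the image
    # 3. Act: the result feeds back into the robot's control logic
    print("Stop" if decision == "obstructed" else "Proceed")
```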
Types of Machine Vision
There are three main types of machine vision:
One-dimensional vision: 1D vision doesn’t analyze the image of an entire object at once, but reads it one line at a time, often using a line-scan camera. This type of machine vision is typically used in inspection processes to spot defects in products moving on a conveyor.
Two-dimensional vision: 2D vision uses a digital camera to collect image data, which is then processed by comparing contrast variations from one image to another. This type of machine vision is often used for object tracking, as well as for verification and inspection.
Three-dimensional vision: 3D vision uses multiple digital cameras and other sensors in various locations to capture a digital model of an object, which provides an accurate assessment of its location, size and features. This type of machine vision is typically used to help robots navigate their surroundings, as well as perform order fulfillment tasks like picking products from bins and containers (a simplified stereo-depth sketch follows this list).
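As a rough sketch of the 3D idea, the snippet below assumes OpenCV and a rectified pair of images from two cameras (the file names are placeholders). It computes a disparity map with block matching; larger disparity means a closer object, which is how depth, and therefore an object’s size and location, is recovered.

```python
import cv2

# Placeholder images from two cameras a known distance apart, rectified so
# that corresponding points lie on the same image row.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching measures how far each patch shifts between the two views.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Larger disparity values correspond to closer surfaces.
print("Maximum disparity (closest region):", int(disparity.max()))
```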
Machine Vision Applications
Just as most humans rely heavily on their sense of sight to work and interact with the world, robots are now doing much the same thanks to machine vision. Robot arms use it to inspect parts and products coming off assembly lines, determining which ones meet quality standards. Self-driving taxis use vision systems to read the subtle cues pedestrians give off just before they cross a street, something human drivers appear to be getting much worse at. And in warehouses, autonomous mobile robots use machine vision to help fulfill orders.
Common machine vision applications include:
- Inspecting parts on assembly lines to improve quality control.
- Helping robots working in warehouses locate products and navigate their surroundings.
- Allowing self-driving cars to perceive the world around them and identify potential hazards.
The Manufacturing Industry Uses Machine Vision for Quality Control
When humans inspect parts coming off an assembly line, they don’t typically scrutinize each one looking for defects. It would take too much time, too much money and require a level of focus humans just don’t have, Hummel told Built In.
But a robot arm equipped with a machine vision system, like Rapid Robotics’ Rapid Machine Operator, can inspect each one. “The camera doesn’t sleep. The camera doesn’t care. And the camera’s fast,” Hummel said. “So you can check every part that comes out of that injection molder, or comes out of any process, and decide to discard it.”
As a result, factories are less likely to ship out bad parts and operators have a better understanding of the quality of the products they’re producing, according to Hummel.
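A minimal sketch of that kind of check, assuming OpenCV and NumPy, might compare each captured part against a known-good “golden” reference image and reject the part if too many pixels differ. The file names and thresholds below are illustrative placeholders, not Rapid Robotics’ actual system.

```python
import cv2
import numpy as np

# Placeholder images: a known-good reference part and the part just captured.
golden = cv2.imread("golden_part.png", cv2.IMREAD_GRAYSCALE)
sample = cv2.imread("captured_part.png", cv2.IMREAD_GRAYSCALE)

# Anywhere the sample differs strongly from the reference is a potential defect.
diff = cv2.absdiff(golden, sample)
_, defects = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

defect_pixels = int(np.count_nonzero(defects))
print("FAIL" if defect_pixels > 500 else "PASS", f"({defect_pixels} differing pixels)")
```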
Machine Vision Helps Self-Driving Taxis Understand Their Surroundings
Pedestrians often hint at what they’re going to do before their foot even steps into the crosswalk. They may look up from their phone, glance to their left or raise their hands ever so slightly. Though subtle, those signals can be picked up by autonomous vehicles.
“Those cues are pretty critical when you think about the interactions between the pedestrian and the driver and the vehicle,” RJ He, director of perception at Zoox, a robotaxi company owned by Amazon, told Built In.
Recognizing those cues is one of the strengths of machine vision and where the technology shines, according to He.
Zoox’s electric self-driving cars see and interact with the world around them by pairing artificial intelligence and machine learning algorithms with cameras and other sensing modalities, like LiDAR, radar and thermal cameras, that collect location, speed and other important roadway data. The cars, which are being tested in the San Francisco Bay Area, Las Vegas and Seattle, can even predict what nearby cars, trucks, cyclists and pedestrians may or may not do, like a person stepping into a roadway.
“The magic happens when you are very thoughtful about how you use the individual sensing modalities and how they complement each other,” He told Built In. “Both in terms of things like field of view and coverage, but more importantly, the algorithmic aspect of things.”
By leveraging the individual strengths of machine vision and all these different sensor modalities and algorithms, Zoox is able to create an accurate representation of the world its cars can respond to in real time, without harming or inconveniencing passengers or the other people and living things around them.
In a country like the United States, where roadways have become even more dangerous in recent years, that’s become incredibly important and is one of the motivations behind what Zoox is doing.
“With human drivers — with distractions and all that — we need to ensure that we can react to all of these bad behaviors,” He said.
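To illustrate the fusion idea in the simplest possible terms, here is a toy Python sketch, not Zoox’s actual stack, that pairs a camera detection, which supplies an object’s label and subtle cues, with the nearest LiDAR return, which supplies its range, before a simple rule decides how to react. Every class and field name here is an illustrative assumption.

```python
from dataclasses import dataclass


@dataclass
class CameraDetection:
    label: str            # e.g. "pedestrian"
    bearing_deg: float    # direction of the detection relative to the vehicle
    about_to_cross: bool  # subtle cue inferred from the image (a toy stand-in)


@dataclass
class LidarReturn:
    bearing_deg: float
    range_m: float


def fuse(detections, returns, max_gap_deg=3.0):
    """Pair each camera detection with the closest LiDAR return by bearing.

    The camera says *what* the object is; the LiDAR says *where* it is.
    """
    fused = []
    for det in detections:
        nearest = min(returns, key=lambda r: abs(r.bearing_deg - det.bearing_deg))
        if abs(nearest.bearing_deg - det.bearing_deg) <= max_gap_deg:
            fused.append((det, nearest.range_m))
    return fused


# A pedestrian seen slightly to the left; LiDAR puts something there at 12 m.
objects = fuse(
    [CameraDetection("pedestrian", -2.0, about_to_cross=True)],
    [LidarReturn(-2.1, 12.0), LidarReturn(30.0, 45.0)],
)
for det, range_m in objects:
    if det.label == "pedestrian" and range_m < 15 and det.about_to_cross:
        print("Slow down: a nearby pedestrian may be about to cross")
```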
Machine Vision Helps Robots Fulfill Orders
While many robotics companies working in the logistics industry rely on LiDAR to help their robots navigate warehouses and fulfill orders, inVia Robotics, a company specializing in warehouse automation, instead relies on machine vision.
According to inVia CEO and co-founder Lior Elazary, the company’s robots use machine vision to operate inside warehouses, pulling products for e-commerce orders by scanning AprilTags, which Elazary likens to QR codes affixed to containers. A robot analyzes the code it captures, which allows it to understand what it’s looking at.
The machine vision system is also trained inside the warehouse, where it collects and captures surrounding features that are analyzed using algorithms to form what Elazary referred to as a “hypothesis” of where the robot is located within the warehouse, as well as which container or bin it needs to pull from. The robot then uses visual servoing, which controls its motion and allows it to grab an object much like a human would.
“Ultimately, we go and grab things with our eyes — you see where it is, and you hone in,” Elazary said. “And that’s what our robots do.”
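As a hedged illustration of the tag-reading step, the sketch below uses OpenCV’s aruco module, which ships AprilTag dictionaries in version 4.7 and later, to detect a tag in a frame and look up which container it marks. The image file and the ID-to-bin mapping are hypothetical, not inVia’s implementation.

```python
import cv2

# Placeholder frame showing a container with an AprilTag attached.
frame = cv2.imread("bin_camera_frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# OpenCV >= 4.7 includes AprilTag families in its aruco module.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, _rejected = detector.detectMarkers(gray)

# Hypothetical lookup from tag ID to what the container holds.
bin_contents = {17: "SKU-4821", 18: "SKU-0093"}

if ids is not None:
    for tag_id, tag_corners in zip(ids.flatten(), corners):
        # The tag's corners also tell the robot where the bin sits in the
        # image, which is what visual servoing steers toward.
        print(f"Tag {int(tag_id)}: {bin_contents.get(int(tag_id), 'unknown bin')}")
```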
Essentially, machine vision lets robots adapt, so little things like a bowed shelf or a misplaced box don’t trip them up. These are minor hiccups for humans, but they typically cause problems for robots that rely on LiDAR and can lead to greater expenses, like replacing the shelves in a warehouse.
“It’s much harder to do,” Elazary said, referring to machine vision systems, “but it’s a lot more cost effective.”
Machine Vision vs. Computer Vision
While the terms machine vision and computer vision are often used interchangeably, the main differences are:
- Machine vision systems require cameras to capture and provide image data, whereas computer vision systems can be fed images from other sources — say the internet.
- Computer vision systems house the AI and machine learning algorithms that process that visual data.
Essentially, machine vision is the eye, while computer vision is the brain.
Machine Vision Companies
While machine vision enables autonomous vehicles and other robots to see, it is most often used in applications like inspection and quality assurance. Here are a few companies developing machine vision technology for use in manufacturing, logistics and more.
3D Infotech is an industrial automation company specializing in universal metrology, essentially a precision measurement application, for the aerospace, automotive and electronics industries. The company’s machine vision systems, built around Cognex’s In-Sight cameras, employ deep learning algorithms for factory applications like inspection and the assembly verification of parts and kits.
Cognex develops machine vision sensors and 2D and 3D machine vision systems that are used to inspect and identify parts in industrial settings. The company’s In-Sight 3D-L4000 machine vision system pairs 3D laser displacement technology with a smart camera to capture better images for inspection applications.
U.K.-based Industrial Vision develops machine vision sensors and systems for use in the pharmaceutical and automotive industries, among others. In the aerospace industry, the company’s machine vision cameras inspect heads-up displays, the transparent screens in fighter jets that show important data, ensuring screws and bolts are in place and performing pass/fail inspections.
Stemmer Imaging’s machine vision systems are used in applications ranging from logistics to agriculture. The company is providing components to develop a machine vision system to help protect endangered birds from wind turbines — essentially turning the turbine off when one approaches. Stemmer is also enabling cobots to work in the automotive industry with 3D machine vision systems, which help them pick and place parts and materials onto conveyor belts and pallets.
VAIA Technologies recently developed a robotic knife sharpening system for a knife manufacturer that utilizes 3D machine vision for inspection, enabling the system to sharpen knives in less than 30 seconds and at a rate of 3,000 per day. The company’s other machine vision inspection systems are used for label verification and ensuring tamper-evident caps are properly applied in packaging applications.