In advanced robotics, machine vision is an indispensable component, guiding robots as they take commands and react by moving from Point A to Point B.
“Robotic arms today are just machines with six motors, where you take a joystick, ask them to go to a particular position, record this position into another position, and record this position again,” said Gokul NA, co-founder of CynLr (Cybernetics Laboratories), a deep-tech robotics and cybernetics startup based in Bangalore, India. “The assumption is that it will keep repeating this action again and again and again within 20-micron precision.”
Robotic arms are programmed to move between preset positions and are designed to operate with high precision. But they lack adaptability: when the objects they handle shift even slightly, grasping and manipulation fail.
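To make that contrast concrete, here is a minimal sketch of the record-and-replay model NA describes. The Arm class, its move_to method and the waypoint values are hypothetical stand-ins, not any vendor's real interface:

```python
# Minimal sketch of the record-and-replay model described above.
# The Arm class and its move_to method are hypothetical, not a vendor API.

class Arm:
    def move_to(self, joint_angles):
        """Drive the six joint motors to the given angles (degrees)."""
        print("moving to", joint_angles)

# Positions are taught once with a joystick/teach pendant and stored verbatim.
recorded_waypoints = [
    (0.0, -45.0, 90.0, 0.0, 45.0, 0.0),   # pick position
    (30.0, -30.0, 60.0, 0.0, 60.0, 0.0),  # place position
]

arm = Arm()
for _ in range(3):  # in production this loop repeats indefinitely
    for waypoint in recorded_waypoints:
        arm.move_to(waypoint)
# There is no sensing anywhere in the loop: if the part shifts even
# slightly from where it sat during teaching, the grasp silently fails.
```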
This rigidity is the crux of their limitation, said NA, and it underscores a universal challenge manufacturers face when using advanced robotics in assembly.
The Core of the Machine Vision Problem: Manipulating Unrecognized Objects
A significant part of manufacturing tasks involves basic repetitive actions, such as moving parts or assembling items. These tasks are largely manual despite high labor costs, argued NA.
“If my numbers are right, the U.S. pays around $1.3 trillion in wages in manual labor alone for [the] manufacturing sector,” NA said. “That’s a lot of un-automatable tasks.”
This deficiency is compounded by the fact that robots are limited to basic tasks, such as feeding parts into a machine or moving parts from one location to another. “This is their primary task,” NA pointed out.
Current vision systems use color images and pattern recognition to identify objects and construct depth, explained NA, but they falter on reflective or obscured items. Such failures show that today's solutions rest on a rudimentary understanding of vision, he said, and that is why vision systems do not scale. If the robot cannot identify an object, it stalls and won't know where to go.
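For illustration, the pattern-recognition approach NA critiques looks roughly like the following, using OpenCV's standard template matching. The image file names and the 0.8 threshold are placeholders:

```python
# Illustrative only: classic 2-D pattern matching of the kind NA critiques.
# Uses OpenCV's real matchTemplate API; the file names are placeholders.
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)           # camera frame
template = cv2.imread("part_template.png", cv2.IMREAD_GRAYSCALE)  # known part

# Slide the template across the scene, scoring the correlation at each offset.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

# A fixed threshold decides whether the part was "seen". Mirror-finished
# parts reflect their surroundings, so their appearance changes with every
# pose and lighting condition; the score collapses and there is no fallback.
if best_score > 0.8:
    print("part found at", best_loc)
else:
    print("part not recognized; robot cannot proceed")
```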
Cameras are the preferred way to make a robotic system dynamic, NA said, but the bottleneck is that camera-based systems must identify objects at every point yet cannot adjust dynamically to changing situations.
CynLr responded to the innovation opportunity by developing a visual object intelligence platform that interfaces with robotic arms. The solution instructs robotic arms to pick up unrecognized objects without recalibrating hardware. It also works with mirror-finished objects, an ongoing challenge for robotic vision systems.
Hot Swapping: Platform Enables Standardized Production for Different Outputs
CynLr looked to human vision for guidance in solving the problem of coordinating vision for robot gripper manipulation, said NA. Humans intuitively use vision in intricate ways that entail layers of processing and contextual understanding. We instinctively process cues—such as depth perception, motion, autofocus and convergence—and effortlessly use the information to navigate or manipulate objects.
“More than 55% of your brain, at any given point of time, has to process visual data or visual information,” explained NA, adding that CynLr has interrogated those processing layers and found that better solutions emerge when you examine how severely current machine vision systems oversimplify that computation. “That’s what we are actually building—a sentience before intelligence,” NA said.
CynLr is developing functionality that allows a robotic arm to handle an object at an unknown position in its view: to grasp it, pick it up, rotate and explore it, and bring it to an orientation the system recognizes. In essence, NA said, the platform enables product-agnostic assembly lines that can produce different outputs with little additional capital cost.
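A conceptual sketch of that explore-until-familiar loop might look like the following. Every function and method here is a hypothetical stand-in for illustration, not CynLr's actual API:

```python
# Conceptual sketch of the explore-until-familiar loop described above.
# All names here are hypothetical stand-ins, not CynLr's actual API.

def grasp_visible_object(arm, camera):
    """Pick up whatever object is in view, at whatever pose it presents."""
    ...

def view_matches_known_pose(frame) -> bool:
    """Return True once the current view matches a pose the system knows."""
    ...

def explore(arm, camera, max_rotations=24):
    grasp_visible_object(arm, camera)
    for _ in range(max_rotations):
        frame = camera.capture()
        if view_matches_known_pose(frame):
            return True               # object brought to a familiar orientation
        arm.rotate_wrist(degrees=15)  # expose a new face of the object
    return False                      # give up: retry or hand off to a human
```

The point of the loop is that recognition is no longer a precondition for manipulation; the arm manipulates first, exploring the object until it reaches a state the system already understands.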
Showtime for Agnostic Vision-Guided Robotic Manipulators
CynLr is currently proving out its machine vision stack with deployments at Denso and General Motors. It also showcased its general-purpose, semi-humanoid visual manipulation robot platform, CyRO, at the Robotics Summit & Expo in Boston in May. CyRO is billed as a dual-arm, vision-guided robotic manipulator that can intuitively grasp objects it has never seen before and switch between two tasks on a movable station.
Operating since 2019, the startup has shown early promise with its product and drawn interest from potential multinational customers. To date, the company has raised $5.2 million in funding over two rounds.
Watch additional parts of this interview series with Gokul NA: