
Robotics Q&A with UC Berkeley’s Ken Goldberg


For the next few weeks, TechCrunch’s robotics newsletter Actuator will be running Q&As with some of the top minds in robotics. Subscribe here for future updates.

Part 1: CMU’s Matthew Johnson-Roberson
Part 2: Toyota Research Institute’s Max Bajracharya and Russ Tedrake
Part 3: Meta’s Dhruv Batra
Part 4: Boston Dynamics’ Aaron Saunders

Ken Goldberg is a professor and the William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley, a co-founder and chief scientist at the robotics parcel-sorting startup Ambi Robotics, and an IEEE Fellow.

What role(s) will generative AI play in the future of robotics?

Although the rumblings started a bit earlier, 2023 will be remembered as the year when generative AI transformed robotics. Large language models like ChatGPT allow robots and humans to communicate in natural language. Words evolved over time to represent useful concepts, from “chair” to “chocolate” to “charisma.” Roboticists have also discovered that large vision-language-action (VLA) models can be trained to facilitate robot perception and to control the motions of robot arms and legs. Training requires vast amounts of data, so labs around the world are now collaborating to share it. Results are pouring in, and although there are still open questions about generalization, the impact will be profound.
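
As a rough sketch of what natural-language communication with a robot can look like, consider the snippet below. Everything in it, the primitive names and the `query_llm` stub alike, is invented for illustration; real systems like the VLA models above are trained end to end rather than prompted this way.

```python
# Hypothetical sketch: mapping a natural-language request onto a robot's
# discrete skill primitives. query_llm stands in for any LLM API client.
PRIMITIVES = ["pick(object)", "place(location)", "open_gripper()", "home()"]

def plan_from_language(request: str, query_llm) -> list[str]:
    """Ask a language model to translate a request into robot primitives."""
    prompt = (
        "Translate the user's request into an ordered list of the robot "
        f"primitives {PRIMITIVES}, one primitive per line.\n"
        f"Request: {request}"
    )
    return query_llm(prompt).strip().splitlines()

# plan_from_language("put the mug on the shelf", query_llm) might return
# ["pick(mug)", "place(shelf)"]
```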

Another exciting topic is multi-modal models, in two senses of “multi-modal”:

  • Multi-modal in combining different input modes, e.g., vision and language. This is now being extended to include tactile and depth sensing, as well as robot actions.
  • Multi-modal in terms of allowing different actions in response to the same input state. This is surprisingly common in robotics; for example, there are many ways to grasp a given object. Standard deep models will “average” these grasp actions, which can produce very poor grasps (see the sketch after this list). One very exciting way to preserve multi-modal actions is Diffusion Policies, developed by Shuran Song, now at Stanford.
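
A toy numerical example makes the “averaging” failure concrete. This is an illustrative sketch, not code from the Diffusion Policy work, and the two grasp angles are invented for the example:

```python
# Why averaging multi-modal grasp actions fails: suppose an object can be
# grasped by approaching from the left (-90 degrees) or the right (+90).
import numpy as np

valid_grasp_angles = np.array([-90.0, 90.0])  # two distinct valid modes

# A regression model trained with mean-squared error is pulled toward the
# mean of the labels it sees for the same input state:
averaged_action = valid_grasp_angles.mean()
print(averaged_action)  # 0.0: approach head-on, a pose that grasps nothing

# A multi-modal policy instead commits to one valid mode per sample,
# which is what diffusion policies achieve via iterative denoising:
sampled_action = np.random.choice(valid_grasp_angles)
print(sampled_action)  # -90.0 or 90.0, both real grasps
```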

What are your thoughts on the humanoid form factor?

I’ve always been skeptical about humanoids and legged robots, as they can be overly sensational and inefficient, but I’m reconsidering after seeing the latest humanoids and quadrupeds from Boston Dynamics, Agility and Unitree. Tesla has the engineering skills to develop low-cost motors and gearing systems at scale. Legged robots have many advantages over wheels in homes and factories to traverse steps, debris and rugs. Bimanual (two-armed) robots are essential for many tasks, but I still believe that simple grippers will continue to be more reliable and cost-effective than five-fingered robot hands.

Following manufacturing and warehouses, what is the next major category for robotics?

After the recent union wage settlements, I think we’ll see many more robots in manufacturing and warehouses than we have today. Recent progress in self-driving taxis has been impressive, especially in San Francisco, where driving conditions are more complex than in Phoenix. But I’m not convinced that they can be cost-effective. For robot-assisted surgery, researchers are exploring “Augmented Dexterity,” where robots can enhance surgical skills by performing low-level subtasks such as suturing.

How far out are true general-purpose robots?

I don’t expect to see true AGI and general-purpose robots in the near future. Not a single roboticist I know worries about robots stealing jobs or becoming our overlords.

Will home robots (beyond vacuums) take off in the next decade?

I predict that within the next decade we will have affordable home robots that can declutter — pick up things like clothes, toys and trash from the floor and place them into appropriate bins. Like today’s vacuum cleaners, these robots will occasionally make mistakes, but the benefits for parents and senior citizens will outweigh the risks.

What important robotics story/trend isn’t getting enough coverage?

Robot motion planning. This is one of the oldest subjects in robotics: how to control the motor joints to move the robot tool while avoiding obstacles. Many think this problem has been solved, but it hasn’t.

Robot “singularities” are a fundamental problem for all robot arms; they are very different from Kurzweil’s hypothetical point in time when AI surpasses humans. Robot singularities are points in space where a robot stops unexpectedly and must be manually reset by a human operator. Singularities arise from the math needed to convert desired straight-line motion of the gripper into the corresponding motions for each of the six robot joint motors. At certain points in space, this conversion becomes unstable (similar to a divide-by-zero error), and the robot needs to be reset.
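
A worked miniature of that divide-by-zero behavior: the sketch below uses a two-link planar arm with made-up link lengths rather than the six-joint arms described above, but the instability is the same. The matrix that converts joint speeds into gripper speeds (the Jacobian) loses rank near a singularity, so inverting it produces joint speeds that explode:

```python
# Minimal sketch: near a singularity, converting a straight-line gripper
# motion into joint motions behaves like a divide-by-zero.
import numpy as np

L1, L2 = 1.0, 1.0  # assumed link lengths for a two-link planar arm

def jacobian(t1, t2):
    """Maps joint velocities (dt1, dt2) to gripper velocity (dx, dy)."""
    return np.array([
        [-L1 * np.sin(t1) - L2 * np.sin(t1 + t2), -L2 * np.sin(t1 + t2)],
        [ L1 * np.cos(t1) + L2 * np.cos(t1 + t2),  L2 * np.cos(t1 + t2)],
    ])

desired_vel = np.array([0.1, 0.0])  # move the gripper in a straight line

for t2 in [1.0, 0.1, 0.01]:  # elbow angle near 0 means a stretched-out arm
    J = jacobian(0.5, t2)
    joint_vel = np.linalg.solve(J, desired_vel)  # invert the conversion
    print(f"elbow={t2:5.2f}  det(J)={np.linalg.det(J):+.4f}  "
          f"max joint speed={np.abs(joint_vel).max():8.2f}")

# As det(J) approaches 0, the commanded joint speeds blow up past hardware
# limits; a real controller faults at that point and must be reset.
```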

For repetitive robot motions, singularities can be avoided by tediously fine-tuning the motions by hand so that they never encounter a singularity. Once such motions are determined, they are repeated over and over again. But for the growing generation of applications where robot motions are not repetitive, including palletizing, bin-picking, order fulfillment and package sorting, singularities are common. They are a well-known and fundamental problem, as they disrupt robot operations at unpredictable times (often several times per hour). I co-founded a new startup, Jacobi Robotics, that implements efficient algorithms that are *guaranteed* to avoid singularities. This can significantly increase reliability and productivity for all robots.
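
Jacobi Robotics’ actual algorithms aren’t spelled out here, but the naive version of the idea can be sketched: score each candidate waypoint of a planned motion by its manipulability, which drops to zero at a singularity, and reject paths that pass too close. The threshold below is an assumption for illustration, and `jacobian()` is the function from the previous sketch:

```python
# Crude, hypothetical singularity screening during motion planning; this
# is only the naive idea, not Jacobi Robotics' actual algorithm.
import numpy as np

MANIPULABILITY_FLOOR = 0.05  # assumed safety threshold

def manipulability(J):
    """Yoshikawa's measure, sqrt(det(J @ J.T)); it is 0 at a singularity."""
    return np.sqrt(np.linalg.det(J @ J.T))

def path_is_safe(joint_waypoints, jacobian_fn):
    """Accept a path only if every waypoint stays clear of singularities."""
    return all(
        manipulability(jacobian_fn(*q)) > MANIPULABILITY_FLOOR
        for q in joint_waypoints
    )

# path_is_safe([(0.5, 1.0), (0.5, 0.01)], jacobian) -> False, because the
# second waypoint is nearly stretched out and therefore close to singular.
```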
