Robots plus artificial intelligence: realizing the robots of your imagination

The robots people imagine are the kind from science-fiction movies: machines that can help humans with all sorts of tasks and even have emotions. Real robots today are already at work in many production and everyday settings, appearing as robotic arms or in other forms and handling a variety of specialized jobs. Many households, for example, already use robot vacuums, even if people rarely think of them as real "robots." To make robots truly intelligent, artificial intelligence is now being combined with robotics, although many challenges remain.

Written by Zhao Shan and Lin Zeling

In recent years, with the rise of new technologies such as artificial intelligence, the Internet of Things, autonomous driving, and intelligent transportation, robots have gradually entered people's daily lives in many forms, and household and service robots keep appearing. Robot vacuums, thanks to their moderate prices, were the first to reach millions of homes; they have a degree of intelligence and can automatically vacuum and mop a room. The fully automated cooking and food delivery performed by robots in the media restaurant at the 2022 Winter Olympics drew wide attention. Emotional robots are a newer category: algorithms are used to give robots "human emotions", so that they can express, recognize, and understand joy, sorrow, and anger, imitate and extend human emotional responses, and keep children and the elderly company. Well-known examples include Sony's Aibo robot dog and SoftBank Group's Pepper robot.

A modern robot is a complex system built from many high-tech subsystems, generally including processors, sensors, controllers, actuators, and various end effectors mounted at the end of the robot arm. Robot systems are complex and inherently interdisciplinary: they span software and hardware and cover major engineering fields such as mechanics, electronics, control, and manufacturing. More recently, robotics has extended into artificial intelligence, letting robots communicate with humans more naturally, move more nimbly, and take on an ever wider range of functions; it is even being combined with new fields such as biotechnology and neuroscience.

Over the past ten years, five robotics technologies have appeared on MIT Technology Review's annual list of "10 Breakthrough Technologies."

Baxter, the "blue-collar robot" developed by Rethink Robotics and known in academia as a collaborative robot, is safe, inexpensive, easy to program, and interactive. It can work alongside people on manufacturing assembly lines as a genuine helper, and its arrival showed that many bottlenecks of traditional industrial robotics could be broken.

To guarantee worker safety, early collaborative robots had no internal power source; the motive power was generally supplied by the human worker, and the robot's job was to provide computer-controlled guidance by redirecting or steering the payload in cooperation with that worker. Later collaborative robots supply limited power of their own and add multiple sensors that monitor both the robot and the people working with it to keep those people safe. Although excellent versatility, human-machine friendliness, and affordable prices all remain challenging goals, collaborative robots aim to turn the early servant relationship between humans and robots into a partnership, opening a new chapter in robotics research. The research itself has also evolved, from simply stacking application functions onto robots to rethinking the working relationships and structures between humans and machines. Compared with humans or robots working alone, human-robot teamwork can greatly improve efficiency.
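One common way such sensor-based safety monitoring works is distance-based speed scaling: the closer a detected person is to the arm, the slower the robot is allowed to move, down to a full stop. The sketch below illustrates only this general idea; the thresholds, the proximity sensor, and the robot interface are illustrative assumptions, not any vendor's actual API.

```python
# Minimal sketch of distance-based speed scaling for a collaborative robot.
# All names and thresholds are illustrative assumptions, not a real vendor API.

def allowed_speed_fraction(human_distance_m: float,
                           stop_dist: float = 0.3,
                           full_speed_dist: float = 1.5) -> float:
    """Map the distance to the nearest detected person to a speed limit in [0, 1]."""
    if human_distance_m <= stop_dist:           # person too close: protective stop
        return 0.0
    if human_distance_m >= full_speed_dist:     # person far away: full speed
        return 1.0
    # Linear ramp between the protective-stop and full-speed distances.
    return (human_distance_m - stop_dist) / (full_speed_dist - stop_dist)


def control_step(robot, proximity_sensor):
    """One monitoring cycle: read the sensor, scale the commanded speed."""
    distance = proximity_sensor.nearest_person_distance()   # hypothetical sensor call
    robot.set_speed_scale(allowed_speed_fraction(distance)) # hypothetical robot call


if __name__ == "__main__":
    for d in (0.2, 0.6, 1.0, 2.0):
        print(f"person at {d} m -> speed limit {allowed_speed_fraction(d):.2f}")
```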

The robot can walk relatively quickly over uneven and unfamiliar terrain. Photo by Webb Chappell

Take Baxter as an example: the hallmark of collaborative robot technology is a compliant robotic arm equipped with multiple sensors (cameras, sonar, force feedback, collision detection, and so on) that make interaction between people and robots safer. Through the operator's "hands-on" demonstration teaching, the threshold for task programming is lowered, so the robot can adapt to new tasks faster and more easily; this suits small-batch production and the shortened product cycles of small and medium-sized enterprises. Collaborative robots are also compact and can usually stand right next to a workbench, helping workers with highly repetitive tasks such as picking, placing, packaging, gluing, and welding. Finally, they are cheaper than traditional industrial robots.
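The "hands-on" demonstration teaching described above is often implemented as kinesthetic teaching: the operator physically guides the arm, joint positions are recorded as waypoints, and the robot later replays them. The sketch below is a schematic of that record-and-replay idea; the robot interface (`read_joint_angles`, `move_to`) is a hypothetical placeholder, not Baxter's actual SDK.

```python
import time

# Schematic sketch of kinesthetic "record and replay" demonstration teaching.
# The robot interface (read_joint_angles / move_to) is a hypothetical placeholder.

def record_demonstration(robot, duration_s: float = 10.0, rate_hz: float = 20.0):
    """Sample joint angles while the operator physically guides the arm."""
    waypoints = []
    period = 1.0 / rate_hz
    t_end = time.time() + duration_s
    while time.time() < t_end:
        waypoints.append(robot.read_joint_angles())  # e.g. a tuple of 7 joint angles
        time.sleep(period)
    return waypoints

def replay_demonstration(robot, waypoints, rate_hz: float = 20.0):
    """Replay the recorded trajectory at the rate it was captured."""
    period = 1.0 / rate_hz
    for joint_angles in waypoints:
        robot.move_to(joint_angles)
        time.sleep(period)
```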

Collaborative robots represent the latest trend in robotics and in the evolving relationship between humans and robots, from tool to genuine assistant. The collaborative robot market has also looked extremely promising in recent years. According to the International Federation of Robotics (IFR), global industrial robot sales in 2016 were 294,000 units, with a global installed base of 1.828 million units. Barclays Capital predicts that the global collaborative robot market will grow from US$116 million in 2015 to US$11.5 billion in 2025. Collaborative robots will mainly be used for item picking, packaging, parts assembly on production lines, material preparation, and operating other machines, and are expected to be widely adopted in small and medium-sized manufacturing, medicine, electronic components, and other fields.

The collaborative robot market is in a period of rapid growth, and its size may well exceed the above estimate within ten years. That is because collaborative robots can be used not only in industry but also in non-industrial, commercial settings; even if practical consumer-grade robotic arms are unrealistic in the short term, non-industrial sales are likely to grow enormously in the near future.

Logistics warehousing and medical care are two fields where research and commercialization are already under way. In warehousing and logistics, there are currently two main approaches to picking. One is "goods to person", represented by Amazon's Kiva robots and the smart warehouse technology of Ocado in the UK; the other uses mobile robots and robotic arms to replace workers in sorting items from fixed shelves, which is also the focus of the Amazon Picking Challenge. Some teams have used FANUC's LR Mate 200 series lightweight robots with 3D vision systems for shelf sorting. E-commerce and smart logistics warehousing are both very promising markets. Another area is medical rehabilitation robots and prosthetic robots: because collaborative robots are relatively safe and their arms can imitate the compliance of a human arm, they are well suited to these settings. Beyond that, robots that cook, act as tour guides, or wait tables are all promising application directions, opening up more ways for robots to enter the lives of ordinary people.

However, the development of collaborative robot technology has run into a problem: different hardware must be programmed independently, and the time-consuming, labor-intensive development drives up costs. Industrial robots are used mainly in manufacturing, where each one performs its own duty on the assembly line and can complete tasks precisely at a specific workstation. Under this development model, independent hardware must be built for each kind of robot, and corresponding control software must issue specific, precise instructions for each specific task. For example, the way a multi-joint, multi-degree-of-freedom humanoid hand picks up a cup is certainly very different from the way a gripper arm with only two "fingers" picks up the same cup.

If different robots could share the skills they have learned, duplicated development work would be greatly reduced and the adoption of robots would accelerate. Knowledge sharing between robots is one of the major breakthroughs aimed at this problem; it lets new skills and techniques spread among robots much more quickly.

Since skill sharing between robots was first proposed, it has remained a hot topic in robotics, spawning many derivative technologies and new integrations with fields such as artificial intelligence. In 2016, Sergey Levine was named one of MIT Technology Review's "Innovators Under 35". After leaving his university teaching position he joined Google to continue this research, and in the same year published the paper "Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection".

Levine found that by having six robotic arms practice grasping different objects over a long period and share the parameters of the neural network controlling hand-eye coordination during grasping, the training dataset could be made as large as possible and the efficiency of training and tuning the network improved. The highlight of this line of work is the combination of deep learning with robot hardware control, which is likely to remain one of the most promising directions in robotics.
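The core idea, pooling grasp attempts from several arms into one shared training set for a single hand-eye coordination policy, can be sketched roughly as below. The data format, the robot interface, and the training loop are simplified assumptions for illustration, not the actual Google implementation.

```python
import random

# Rough sketch of shared multi-arm grasp learning: several arms attempt grasps,
# all experience goes into one shared pool, and a single policy/network is
# trained on the pooled data, so every arm benefits from the others' attempts.
# The arm and policy interfaces here are illustrative assumptions.

class SharedExperiencePool:
    def __init__(self):
        self.attempts = []                    # (image, motor_command, grasp_succeeded)

    def add(self, image, command, success):
        self.attempts.append((image, command, success))

    def sample_batch(self, batch_size=64):
        return random.sample(self.attempts, min(batch_size, len(self.attempts)))


def collect_from_arm(arm, policy, pool, n_attempts=1):
    """One arm tries grasps and contributes its experience to the shared pool."""
    for _ in range(n_attempts):
        image = arm.capture_image()              # hypothetical camera call
        command = policy.propose_grasp(image)    # current network's guess
        success = arm.execute_grasp(command)     # True / False outcome
        pool.add(image, command, success)


def train_shared_policy(arms, policy, steps=1000):
    """All arms share one pool and one network."""
    pool = SharedExperiencePool()
    for _ in range(steps):
        for arm in arms:
            collect_from_arm(arm, policy, pool)
        policy.update(pool.sample_batch())       # one training step on pooled data
    return policy
```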

In May 2017, Professor Julie Shah of MIT's Computer Science and Artificial Intelligence Laboratory announced C-LEARN, a technique that combines traditional robot demonstration teaching with motion-planning programming. By first giving the robot a base of information about how to grasp a range of typical objects, and then providing only a single demonstration, the robot can learn to grasp a whole series of different objects. More importantly, these skills can be transferred automatically to other robots, even robots that do not share the original robot's way of moving or its mechanical structure.

To use C-LEARN, the user first provides the robot with a knowledge base describing how to grasp various objects subject to different constraints. For example, a tire and a steering wheel have similar shapes, but to attach them to a car the robot must configure its arm and the tool at its end in different ways to move each one properly. The operator then uses a 3D interface to show the robot a demonstration of a specific task, which contains a series of relevant moments called "keyframes." By matching these keyframes against the situations in the knowledge base, the robot can automatically produce a motion plan that the operator can edit as needed. Using this technique, the Optimus dual-arm bomb-disposal robot successfully transferred the skills it had learned, including opening doors and moving objects, to Atlas, a humanoid robot 6 feet (about 1.8 m) tall and 400 pounds (about 181.4 kg) in weight.
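A rough way to picture the keyframe-matching step is as a lookup of each demonstrated keyframe against constraint templates in the knowledge base, followed by motion planning between consecutive matched keyframes. The sketch below is a loose schematic of that pipeline under simplified assumptions of our own; it is not the published C-LEARN code, and the `score` and `plan` methods are hypothetical.

```python
# Loose schematic of a C-LEARN-style pipeline: match demonstrated keyframes
# against a knowledge base of grasp/constraint templates, then plan motion
# segments between consecutive keyframes. Illustrative simplification only.

def match_keyframe(keyframe, knowledge_base):
    """Pick the knowledge-base template whose constraints best fit this keyframe."""
    return max(knowledge_base, key=lambda template: template.score(keyframe))

def build_skill(demonstration_keyframes, knowledge_base, planner):
    """Turn a single demonstration into an editable motion plan."""
    matched = [match_keyframe(kf, knowledge_base) for kf in demonstration_keyframes]
    plan = []
    for current, nxt in zip(matched, matched[1:]):
        # Plan a segment that satisfies both templates' constraints.
        plan.append(planner.plan(start=current, goal=nxt))
    return plan    # the operator can then review and edit this plan
```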

C-LEARN addresses the inefficiency of traditional demonstration teaching, which is slow and requires separate programming for each robot, making it easier and faster for people to teach robots new skills. One can imagine that once this kind of rapid skill learning is applied to the collaborative robots described above, robots' capabilities will grow quickly, letting them adapt to more tasks and spread into more fields.

With hydraulic power and multiple sensors, Boston Dynamics' BigDog robot can stay stable on difficult terrain. Image courtesy of Boston Dynamics.

The collaborative robots that currently dominate the market do not look human; they are essentially robotic arms whose main purpose is to lighten human workloads and handle repetitive jobs in harsh environments. Full-body humanoid robots modeled on the human form are a frontier of robotics research and a major goal of robotics and artificial intelligence. Agile robots that walk on legs represent a major breakthrough in robot mobility, finally freeing robots from the limits of terrain and environment so they can go wherever humans can go.

The leader in this technology is Boston Dynamics, whose bipedal and quadrupedal robots have outstanding balance and dexterity. They can walk over rough, complex terrain and reach places that most wheeled robots cannot. To walk at all, a robot needs dynamic balance at every step and strong tolerance of momentary instability. That includes quickly adjusting where the foot lands, calculating how much force a sudden turn requires, and, most importantly, applying a large and precise force to the foot within a very short time to control the robot's overall posture. This demands extremely advanced capabilities across control theory, system integration, and engineering implementation.
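To give a flavor of what "quickly adjusting where the foot lands" involves, a classic textbook simplification is the linear inverted pendulum "capture point": the spot where the foot must land for the robot to come to rest, computed from the center-of-mass position and velocity. The toy sketch below uses that simplified model only; real controllers such as Boston Dynamics' are far more sophisticated.

```python
import math

# Capture-point foot placement under the linear inverted pendulum simplification:
#   x_capture = x_com + v_com * sqrt(h / g)
# A toy illustration of one ingredient of dynamic balance, not a real controller.

GRAVITY = 9.81  # m/s^2

def capture_point(com_position: float, com_velocity: float, com_height: float) -> float:
    """Where to place the foot (1-D) so the robot can come to a stop."""
    return com_position + com_velocity * math.sqrt(com_height / GRAVITY)

# Example: center of mass 0.9 m high, above x = 0.0 m, moving forward at 0.5 m/s.
if __name__ == "__main__":
    print(capture_point(0.0, 0.5, 0.9))   # about 0.15 m ahead of the center of mass
```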

Boston Dynamics' BigDog robot walking. Image courtesy of Boston Dynamics.

The latest version of Boston Dynamics' Atlas works both outdoors and inside buildings and is designed for mobile manipulation. It is electrically powered and hydraulically actuated, using sensors in its body and legs to balance, and lidar and stereo sensors in its head to avoid obstacles, assess terrain, and help with navigation and manipulation. In the video Boston Dynamics released in 2021, Atlas is smaller and more agile than before, standing 1.75 m tall and weighing 82 kg. It demonstrated astonishing "parkour" abilities: running along narrow balance beams, vaulting over obstacles, and somersaulting off high platforms. These feats rest on Boston Dynamics' world-leading control theory, system design, and engineering. One important difference between Atlas and other companies' robots is its use of a hydraulic system for motion control, which provides greater instantaneous power output and more precise force transmission. Atlas also benefits from a "bionic" integrated structural design: like a real person, it has not only support structures and actuators analogous to bones and joints, but also hydraulic lines and wiring analogous to blood vessels and nerves.

What is most striking is that, beyond its dexterity, Atlas is more human-like than when it was first unveiled in 2016. In earlier demonstrations it was essentially blind: the environment had to be fixed in advance for its movements to succeed. In the current video it relies much more on its own perception to navigate, adjusting its movements according to what it sees. That means it depends less on pre-set programming, and engineers no longer have to pre-program jumping motions for every situation the robot might encounter.

Another obstacle to the adoption of robots has been their limited adaptability. Robots perform well in controlled environments but fail in uncontrolled ones. In factories and warehouses, for instance, robots easily do things humans cannot, such as cutting parts to millimeter precision, yet without extensive training they cannot simply open a door the way a person does. As Atlas shows, however, robot adaptability has made great progress with the help of artificial intelligence. One key technique roboticists use is reinforcement learning, which lets a robot learn over time to handle objects with different techniques and to choose the best one. Over time this allows a robot to handle a much wider range of tasks and conditions.
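The reinforcement-learning idea mentioned here, trying actions, observing success or failure, and gradually preferring the techniques that work, can be shown in a tiny tabular example. Real manipulation systems use deep networks over camera images and far richer state; the sketch below, with made-up technique names and success rates, only illustrates the principle.

```python
import random

# Tiny illustration of the reinforcement-learning principle: the robot tries
# different grasping techniques, observes success/failure, and gradually
# prefers whichever works best. Technique names and probabilities are made up.

TECHNIQUES = ["pinch_grasp", "power_grasp", "suction"]

def choose_technique(values, epsilon=0.1):
    """Epsilon-greedy: mostly exploit the best-known technique, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(TECHNIQUES)
    return max(values, key=values.get)

def train(attempt_grasp, episodes=1000, learning_rate=0.05):
    values = {t: 0.0 for t in TECHNIQUES}      # estimated success rate per technique
    for _ in range(episodes):
        technique = choose_technique(values)
        reward = attempt_grasp(technique)       # 1.0 if the grasp succeeded, else 0.0
        values[technique] += learning_rate * (reward - values[technique])
    return values

# Example with a made-up environment in which suction works best:
if __name__ == "__main__":
    env = lambda t: float(random.random() < {"pinch_grasp": 0.3,
                                             "power_grasp": 0.5,
                                             "suction": 0.8}[t])
    print(train(env))
```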

As robots become more adaptable, they will be used far more widely and will play important roles in military tasks, waste disposal, logistics and delivery, transportation, and more. It should not be long before the robots of science-fiction movies move from the big screen into real life.

Academic Reviews

Intelligent robots reconstruct future productivity

Written by Xu Huazhe (Assistant Professor, Institute of Interdisciplinary Information Sciences, Tsinghua University)

Whether it is a humanoid robot politely handing you a cup of coffee on a tray or a steel behemoth with gleaming eyes bent on destroying humanity, humans have endless imagination about robots. "Robot" is an old yet ever-new word: as early as 1921, the Czech playwright Karel Čapek called the mechanical humans on the assembly line in his play "robots", and as early as 1941 the word "robotics" first appeared in science-fiction writer Isaac Asimov's story "Runaround". From science fiction to science, robotics has come a long way. Today, scientists are gradually teaching these "iron guys" to manipulate objects with "arms", to "run" like dogs, and even to "walk on two feet" like humans. In 2022, as artificial intelligence steadily matures, robotics research and its related industries are coming back to life.

Artificial intelligence, and deep learning in particular, is nothing new to most people: face-recognition payment on phones, selfie filters, and the recommendation systems behind online ads all rely on deep learning, that is, on learning patterns from data and even generating data. From an AI researcher's perspective, now that we have cameras as "eyes" and speech technology as a "mouth", the natural next step is to add intelligent "hands" and "feet". For robotics researchers, the question of how to install a "brain" in electromechanical devices that can already run and jump has likewise become a recent focus of work.

The integration of artificial intelligence and robotics has therefore become inevitable: AI-powered robots can not only execute prescribed actions like traditional robots, but also combine perception with changes in the environment and generalize through models, moving toward general-purpose behavior. This "strong alliance" opens the most futuristic imaginative space: robots completing a series of complex tasks in unstructured space, the space where humans actually live, relying only on sensor information. Can you imagine, for instance, a robot cooking your entire New Year's Eve dinner during the Spring Festival?

Of course, today's AI robots are still a long way from the intelligent silicon-based beings of our imagination. Even so, humans have never stopped pursuing smarter and more capable robots. In 2019, robot dexterity was selected as one of MIT Technology Review's "10 Breakthrough Technologies", and the accompanying article highlighted Dactyl, a robot research project that caused a sensation at the time. Researchers at OpenAI used deep reinforcement learning on a large amount of randomized simulation data to let a robotic hand learn a strategy for twisting a Rubik's Cube on its own, and then applied that strategy to a real dexterous robotic hand. The project was influential for two reasons: first, reinforcement learning allowed the robot to learn how to complete the task without explicit human instructions, a marker of higher intelligence; second, the transfer from the simulated environment to the real robotic hand showed a clear, feasible path from better simulation and better algorithms to real-world deployment.

In a similar vein, robotics researchers from ETH Zurich and Intel used deep reinforcement learning to train robot dogs in a simulated environment. The strategies learned in training were then deployed on the ANYmal robot dog, allowing it to walk over varied, complex, and even previously unseen surfaces. Previously, this problem required robotics engineers to hand-tune and integrate controllers for each kind of terrain. The key reason this widely noted result was possible is that the AI-driven robot had already "seen" a huge variety of complex, rugged terrain in simulation, so it could handle reality with ease.
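Both the Dactyl and ANYmal results rest on the same pattern: train in simulation while aggressively randomizing the things that differ between simulation and reality (friction, masses, latency, terrain), so the learned policy no longer depends on any one setting. The sketch below shows the general shape of such a training loop; the environment interface and parameter ranges are illustrative assumptions, not code from either project.

```python
import random

# Shape of a domain-randomized sim-to-real training loop, as used in far more
# elaborate form for Dactyl and for ANYmal's locomotion. The simulator interface
# and parameter ranges below are illustrative assumptions only.

def randomize_physics(sim):
    """Resample the simulator parameters the real world might disagree on."""
    sim.set_friction(random.uniform(0.5, 1.5))
    sim.set_payload_mass(random.uniform(0.8, 1.2))       # relative to nominal
    sim.set_motor_latency(random.uniform(0.00, 0.04))    # seconds
    sim.set_terrain_roughness(random.uniform(0.0, 0.1))  # meters

def train(policy, sim, episodes=100_000):
    for _ in range(episodes):
        randomize_physics(sim)               # a new "world" every episode
        observation = sim.reset()
        done = False
        while not done:
            action = policy.act(observation)
            observation, reward, done = sim.step(action)
            policy.record(observation, action, reward)
        policy.update()                      # e.g. one policy-gradient step
    return policy                            # deploy on the real robot afterwards
```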

The combination of robots and artificial intelligence goes far beyond these two examples. Google scientists taught a robot (TossingBot) to throw objects by moving its arm at high speed; researchers at the University of California, San Diego tried to let a robot (DexMV) learn hand motions from human videos; and researchers at Stanford University and MIT (the team I belong to) let a robot (RoboCraft) manipulate deformable objects and even make dumplings. Today's AI algorithms are helping robots complete one multi-step, unstructured task after another that only humans could do before. Robots are no longer just assembly-line arms performing a single prescribed action. This not only blurs the boundary between artificial intelligence and robotics, but also further liberates productivity and frees humans from high-risk, repetitive labor.

Of course, many challenges remain before robots are intelligent enough. At the algorithm level, technologies based on deep learning all rest on the fitting ability of neural networks, and anyone with some experience of neural networks knows their "unreliability": it is extremely difficult for a neural network to be 100% accurate. In an application such as face unlock, a mistake merely means the user cannot unlock the phone and has to try again; in a robotics application, it can threaten lives and property. It is also very hard for robots to cope with extreme cases they have never seen: if a scenario does not appear in the training data or the simulator, the robot will often make the wrong judgment in the real world. At the hardware level, high-precision, high-payload robots are typically expensive and fragile. How to cut the cost of robot hardware so that it can reach ordinary households is another important problem facing robotics researchers and entrepreneurs.

In addition, as AI robots develop, robot ethics has come into public view. As early as Asimov's science fiction, the "Three Laws of Robotics" were proposed: first, a robot may not injure a human being or, through inaction, allow a human being to come to harm; second, a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; third, a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Clearly, people have always had misgivings about robots. Although we are still far from any robot "awakening", there are ethical questions worth thinking about now. For example, when robots talk to humans, will ingrained stereotypes lead them to use the wrong forms of address or pronouns? Will large numbers of robots take away some human jobs? Every technological explosion brings corresponding social and ethical issues, and these too must be thought through and resolved as technology is woven into human life.

What kinds of technology can we build? What kinds of robots can we create? How will human life change once we have them? Driven by curiosity, humans keep exploring unknown frontiers and pursuing the future they imagine, step by step. In China we already see robot vacuums at home, service robots in restaurants, autonomous vehicles (which can be viewed as wheeled transport robots) on the roads, and general-purpose robotic arms in factories. In the foreseeable future, these robots will get smarter "brains" and better-suited "bodies" to take on harder tasks. Under careful research and reasonable constraints, robots will make people's lives easier and more comfortable!

This article is excerpted, with permission, from "Top of Science and Technology: Global Breakthrough Technological Innovation and Future Trends" (People's Posts and Telecommunications Press, January 2023).

