Google’s Artificial Intelligence
Researchers at Google Brain announced in May 2017 that they had created AutoML, an artificial intelligence (AI) that can generate its own AI systems.
More recently, they presented AutoML with the biggest task it had ever faced, and the AI capable of creating AI spawned a 'child' that surpasses all comparable AI made by humans.
Researchers at Google have automated the design of machine learning models using an approach called reinforcement learning. AutoML acts as a controller neural network that develops a child AI network for a specific task.
The task of this particular child AI, which the researchers called NASNet, was to recognize objects (such as people, cars, traffic lights, bags, and backpacks) in a video feed in real time.
AutoML would evaluate NASNet's performance and use that information to improve its child AI, repeating this process thousands of times.
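The evaluate-and-improve loop described above can be sketched in a few lines. The sketch below is purely illustrative: the search space, function names, and scoring are invented for this example, and it substitutes simple random search where Google's actual system trains a reinforcement-learning controller to propose architectures.

```python
import random

# Hypothetical toy search space -- the real NASNet space describes
# convolutional "cells", not these three knobs.
SEARCH_SPACE = {
    "filters": [32, 64, 128],
    "kernel_size": [3, 5, 7],
    "layers": [4, 8, 12],
}

def sample_architecture():
    """Controller step: propose a candidate child architecture."""
    return {key: random.choice(options) for key, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training the child network and measuring its
    validation accuracy; here we just fabricate a noisy score."""
    return (arch["filters"] + arch["layers"]) / 200 + random.random() * 0.1

best_arch, best_score = None, float("-inf")
for _ in range(1000):  # the real search runs thousands of iterations
    arch = sample_architecture()
    score = evaluate(arch)
    if score > best_score:  # feedback: keep what performed best
        best_arch, best_score = arch, score

print(best_arch, round(best_score, 3))
```

In the real system the feedback is richer than "keep the best": the controller's proposal policy itself is updated so that later candidates tend to resemble earlier high-scoring ones.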
NASNet outperformed all other computer vision systems when tested on the ImageNet image classification and COCO object detection datasets. (The Google researchers call these "two of the most respected large-scale academic datasets in computer vision.")
According to the researchers, NASNet classified images in ImageNet's validation set with 82.7 percent accuracy, 1.2 percentage points better than previously published results. The system is also 4 percent more efficient, and on object detection it achieved a mean average precision (mAP) of 43.1 percent.
Additionally, a smaller, less computationally demanding version of NASNet built for mobile platforms outperformed the top models of similar size by 3.1 percent.
Machine learning gives many AI systems the ability to perform specific tasks. While the idea behind it is fairly simple (an algorithm learns by being fed large amounts of data), the process takes an enormous amount of time and effort.
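A minimal sketch of that "learn from data" idea, with everything invented for illustration: fitting a single weight to example pairs drawn from the rule y = 2x, using plain gradient descent on squared error.

```python
# Training examples for the hidden rule y = 2x (illustrative data).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0   # the model: a single learnable weight in y = w * x
lr = 0.05  # learning rate

for _ in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step the weight toward lower error

print(round(w, 2))  # converges near 2.0
```

Real systems repeat this same feedback loop over millions of parameters and examples, which is where the time and effort go.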
An AI that produces AI takes on the hard part by automating the creation of accurate, efficient AI systems. Ultimately, this means AutoML could open up the field of machine learning and AI to non-experts.
As for NASNet in particular, accurate and efficient computer vision algorithms are in high demand because of the sheer number of potential applications. They could be used to create versatile, AI-powered robots or, as one researcher suggested, to help people with visual impairments.
They could also help researchers develop self-driving vehicle technologies. The faster an autonomous vehicle can recognize objects in its path, the faster it can react, thereby increasing the safety of such vehicles.
Researchers at Google recognize that NASNet could prove useful in a wide range of applications, which is why they have open-sourced it for image classification and object detection inference.
In their blog post, they write: “As the machine learning community gets bigger, we’ll be able to build on these models and address a multitude of computer vision problems we haven’t even imagined yet.”
While the potential applications of NASNet and AutoML are plentiful, the creation of an AI that can generate AI raises some concerns. For example, what will prevent the parent from passing unwanted errors on to its children?
And what if AutoML builds systems too quickly for society to keep up? It is not hard to see how NASNet could be used in automated surveillance systems in the near future, perhaps even before regulations to control such systems can be put in place.
Fortunately, world leaders are working to ensure that such systems do not lead to a dystopian future.
Amazon, Facebook, Apple, and several others are members of the Partnership on AI to Benefit People and Society, an organization focused on the responsible development of AI.
The Institute of Electrical and Electronics Engineers (IEEE) has proposed ethical standards for AI, and DeepMind, a research firm owned by Google's parent company Alphabet, recently announced the creation of a group focused on the ethical implications of AI.
Various governments are also working on regulations to prevent AI from being used for dangerous purposes, such as autonomous weapons. Thus, as long as humans control the overall direction of AI development, the benefits of creating an AI capable of generating AI will far outweigh the potential dangers.