Machine Learning Recognition & Implications For Our AI Velociraptor And Us

Published on November 3rd, 2019 | by Michael Barnard


Plastic Dinosaur approached the irregularly shaped object, a platform with four stems reaching to the floor with a projection upward in the rear. He was skittish as he approached, not knowing what it was, his amygdalanet sending out warnings while curiousnet sent out its desire to approach. It didn’t react or move or make noise, so he sniffed it, nudged it and then walked away to charge.

Plastic Dinosaur approached the irregularly shaped object, a platform with four stems reaching to the floor with a projection upward on the right. He was skittish as he approached, not knowing what it was, his amygdalanet sending out warnings while curiousnet sent out its desire to approach. It didn’t react or move or make noise, so he sniffed it, nudged it and then walked away to charge.

Plastic Dinosaur approached the irregularly shaped object, a vertical plane with four stems reaching out to the side with a projection of more solid material away from the plane along the ground. He was skittish as he approached, not knowing what it was, his amygdalanet sending out warnings while curiousnet sent out its desire to approach. It didn’t react or move or make noise, so he sniffed it, nudged it and then walked away to charge.

Plastic Dinosaur approached the irregularly shaped object, a platform at an angle with four stems shooting up at an angle and projection down to the ground. He wasn’t skittish as he approached, knowing that this was the same object he had seen in other positions. He sniffed it, and moved on.

Plastic Dinosaur had learned to identify a specific chair, regardless of what direction he approached it from or whether it was upright or not. But would that work with a different chair?


Embedded video: anamorphic artwork

This is an article in the series David Clement, co-founder of Senbionic, and I are collaborating on regarding the state of the art of neural networks and machine learning, using a fictional robotic velociraptor as a fun foil. It has rubber teeth and claws, so don’t worry about the shrieking. The first article dealt with , the second with its , and the third with and how they can be used to train a neural network. The fourth dealt with due to the limitations of machine learning and sample sizes. The fifth dealt with an interesting situation where its instantiated curiosity and model for learning . The sixth dealt with salience, leading PD to pay attention to human hands, hence light switches, hence learning to . And it can see in the dark. More shrieking.

This article deals with isomorphism, which is to say things that are equivalent or the same. Mathematically, it’s expressed in a variety of ways which equate to having the same things in the set, even if they are in a different order. Over a series of interactions, PD learns to recognize a chair from any angle as being the same object. This is remarkably difficult, as the embedded video makes clear. If you haven’t watched it, go back and do so.

The video is clearly the same set of wires in the same places in space, yet as the viewer’s perspective changes, so does the image it represents. This is a more complex version of the same problem of knowing the identity of a thing. From different angles and in different conditions, everything looks different, yet the thing remains the thing. Unless it’s a river, as no one can step in the same river twice, something Heraclitus pointed out 2,500 years ago, and probably not for the first time. Yet it remains a river, and identifiable as such.

How does a neural net identify with any degree of certainty that a thing is the same thing from any angle? Or that it’s a thing of a certain category? How do we? If you saw a chair from the front, back or side, or lying on the floor, you wouldn’t think “What is that?” You’d automatically identify it as a chair, and perhaps put it somewhere more useful, right it, or sit in it. If you had seen the huge, metal chair sculpture next to Mies van der Rohe’s TD Centre in downtown Toronto before it was relocated, you wouldn’t say, “That’s a weird object,” you’d say “That’s a really big metal chair.”

Imagine trying to write a computer algorithm which could identify a chair, regardless of its orientation, color, material, or size. Imagine explaining to someone how you glance at something from any angle and quickly identify it. It’s non-trivial. Yet we do it instantly in a wide range of lighting conditions for a wide range of objects against a wide variety of backdrops. We can glance at two sentences and say to ourselves, “Those mean the same thing,” even though they use different words. “The reddish-brown, dog-like animal leapt over the barrier” is the same as “The russet fox jumped over the fence.” It’s trivial for us to equate them, but it’s not trivial to get a computer to do the same.
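
As a rough illustration of what “those mean the same thing” looks like to a machine, here is a minimal sketch that compares the two sentences with a pretrained sentence-embedding model. The sentence-transformers library, the model name, and the idea that a high cosine similarity stands in for “same meaning” are all illustrative assumptions, not part of the velociraptor series.

```python
# A minimal sketch, assuming the sentence-transformers package and the
# all-MiniLM-L6-v2 model are available; both are illustrative choices.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

a = "The reddish-brown, dog-like animal leapt over the barrier."
b = "The russet fox jumped over the fence."

# Encode both sentences into fixed-length vectors and compare them.
embeddings = model.encode([a, b])
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

# A high cosine similarity suggests near-paraphrases, even with almost no shared words.
print(f"cosine similarity: {similarity:.2f}")
```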

A few years ago, David and I explored this idea for ourselves in a specific way. We wanted to increase the ability of a person to open a junk drawer and find a tiny object, tangled with other tiny objects, amidst the clutter. Our process of discovery led us to backlighting. We designed a set of trays with LEDs in the base. When a drawer was opened, light shone up from underneath a set of squared resin bowls. The backlighting, combined with the ambient light, allowed our gooey neural nets to identify objects much more crisply among the junk.

The oldest paper I found on the subject was from 1954, by Alonzo Church of Princeton University. That led me to an interesting additional question. We know that a thing is a specific thing, but how do we know that a specific thing is part of a set of the same things? Plastic Dinosaur can see that the chair is the same chair regardless of orientation. But does that give PD the ability to see a variety of chairs as chairs? An intensional [no sic, the “s” is intentional] definition is a statement that a thing with a set of characteristics is a thing: a chair is an object upon which you sit which has leg(s) and a back. Extensionally, chairs include dining room chairs, lounge chairs, folding chairs, wooden chairs, wobbly chairs, and swiveling chairs. Intensional vs. extensional. Would PD understand that a chair of a different material was ‘the same’ as the chair it had learned was a chair?
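
To make that distinction concrete, here is a tiny sketch in Python; the feature names and the enumerated list are illustrative assumptions, not a real classifier.

```python
# A tiny sketch of intensional vs. extensional definitions; the feature
# names and the enumerated set are illustrative assumptions.

# Intensional: membership is defined by a rule over characteristics.
def is_chair_intensional(obj: dict) -> bool:
    return obj.get("sittable", False) and obj.get("legs", 0) >= 1 and obj.get("has_back", False)

# Extensional: membership is defined by enumerating known members.
CHAIRS_EXTENSIONAL = {
    "dining room chair", "lounge chair", "folding chair",
    "wooden chair", "wobbly chair", "swiveling chair",
}

# A steel chair PD has never seen still satisfies the intensional rule...
observed = {"sittable": True, "legs": 4, "has_back": True, "material": "steel"}
print(is_chair_intensional(observed))              # True

# ...but a novel kind of chair is missed by the extensional list.
print("steel patio chair" in CHAIRS_EXTENSIONAL)   # False
```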

Amazingly, today, mostly yes. One of the apps David has on his phone is an instantiation of the RetinaNet neural network. The other day in a coffee shop, as we were chatting before talking to the CTO of Canada’s about the opportunities presented for the organization, BC, and Canada by the exploitable state of machine learning technology today, David pointed his phone’s camera at a nearby table and chairs and it labeled them as such. He pointed it at his rounded latte cup and it labeled it ‘bowl’, which is remarkably close. The state of the art of exploitable neural networks that run on hardware people have in their pockets is high-quality three-dimensional image identification and labeling from any angle.
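
For a sense of how accessible this has become, here is a minimal sketch using the pretrained RetinaNet implementation that ships with torchvision; the photo filename and the confidence threshold are assumptions for illustration, and this is not the specific app David used.

```python
# A minimal sketch with torchvision's pretrained RetinaNet; the image file
# and the 0.5 score threshold are illustrative assumptions.
import torch
from torchvision.io import read_image
from torchvision.models.detection import retinanet_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = retinanet_resnet50_fpn(pretrained=True)  # weights trained on COCO
model.eval()

# Load a photo (hypothetical filename) and scale pixel values to [0, 1].
image = convert_image_dtype(read_image("coffee_shop.jpg"), torch.float)

with torch.no_grad():
    detections = model([image])[0]  # dict with 'boxes', 'labels', 'scores'

# Print the COCO category id and confidence for each confident detection;
# in the COCO label map, for example, id 62 is 'chair'.
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.5:
        print(int(label), round(float(score), 2))
```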

Plastic Dinosaur would probably be built with RetinaNet internalized. After all, if teams of people have done the heavy lifting to be able to say that an object is a specific type of object, why wouldn’t you build it into a neural-net driven velociraptor?

The most recent piece on isomorphism I found was from a tiny handful of months ago, on recognizing graphs as ‘the same’. A graph is not necessarily a visual curve through space; in this context it is a tree or a network. A tree is a directed graph, which is to say a hierarchical structure with a single stem, roots, and leaves, in which any node can only be connected up toward the stem or down toward the leaves. An undirected graph is a network, in which any object or node can be connected to any other object or node, for example a leaf connected to a leaf on another branch by a spider’s web. Dealing with graphs remains a challenging problem for machine learning. Models tend to collapse them and assert an order to the nodes which doesn’t exist in the graph, leading to erratic identification. This has been mostly resolved in visual identification and language through RetinaNet and ELMo, but not in more generic machine learning situations. But that’s being worked on too.
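
To see why asserting a node order causes trouble, here is a small sketch; the two graphs are hypothetical examples, and a matching degree sequence is only a necessary hint of isomorphism, not proof of it.

```python
# A small sketch of why imposing node order on a graph misleads a model;
# the two graphs are hypothetical descriptions of the same 4-node path.
graph_a = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
graph_b = {2: [0], 0: [2, 3], 3: [0, 1], 1: [3]}

def flattened(adjacency: dict) -> list:
    # Naively flattening in node-label order bakes an arbitrary order into the data.
    return [sorted(neighbors) for _, neighbors in sorted(adjacency.items())]

def degree_sequence(adjacency: dict) -> list:
    # A permutation-invariant summary: the sorted list of node degrees.
    return sorted(len(neighbors) for neighbors in adjacency.values())

print(flattened(graph_a) == flattened(graph_b))              # False: the order leaks in
print(degree_sequence(graph_a) == degree_sequence(graph_b))  # True: the invariant matches
```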

In an earlier piece, we talked about the use case of log identification from the air, and how shifting attention spaces or domains a bit would lead to failures of the model. For example, moving from coastal waters off BC to river waters up the Fraser might require retraining of the model, even though it was the same types of logs in water. Similarly, shifting from BC to the mouth of the Amazon would be even more likely to lead to failures of the model due to the greater variance in the attention space. Yet humans looking down at the Fraser or Amazon from a plane would have no problem pointing out logs. Now, researchers are looking at this and applying it to machine learning systems through isomorphism related to subsets of the features in the attention space. The ability of neural nets to move between adjacent yet different domains is increasing.
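
A common, if blunt, way to move a model into an adjacent domain is transfer learning: keep the general-purpose features and retrain only a small task-specific head on new-domain samples. The sketch below is a generic illustration under that assumption, not the actual log-detection system; the two-class head and the layer choices are hypothetical.

```python
# A generic transfer-learning sketch, not the actual log-detection model;
# the 'log' vs. 'not log' head and the training data are hypothetical.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(pretrained=True)  # general visual features from a broad domain

# Freeze the backbone so the shared features stay as they are.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new two-class head for the adjacent domain.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    # One training step over a small batch of new-domain images.
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```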

The aspect of being able to observe what is not present brings us to the extension point. Isomorphism is the process by which we identify a chair as a chair regardless of orientation. And it’s the process by which we identify a chair as part of the class of chairs. But it is also the more sophisticated process by which we identify an absence in the shape of a thing as the same as the thing itself. Imagine a chair-shaped void in a block of transparent resin. We would still see a chair.

And that’s the final point in this piece. With machine learning today, isomorphism is allowing us to identify things even when they aren’t present, from the absences and external impressions they leave in the world around us. Proprietary data is much less proprietary in the age of machine learning. With sufficient data sets adjacent to the proprietary data set, it’s now possible to project a good-enough version of the proprietary data. With gait analysis, height analysis, and mass analysis, we can now start identifying individuals with sufficient probability that they can be tracked through coarse camera feeds without facial recognition.

This is pretty much the end of the introductory material for machine learning in CleanTechnica, although there might be more. Further articles in this series will continue to dive into use cases for clean technology and the low-carbon transformation that’s occurring. We’ve already published a few. We’ll be returning to the trillion trees story not only because it’s a machine learning story, but also to explore the criticisms and responses. We published on the use of machine learning with IoT for water quality management in industrial and utility applications. And we published on the use of machine learning to automatically lay out commercial rooftop solar. These are just the starters in our exploration of the application of this new toolkit to our pressing challenges and opportunities.

We’re gaining new senses with machine learning. But it is also eliminating our ability to believe that much of what we do is actually private, if it’s observed by anything at all, even through a shape in the data moving through space and time. And it’s eradicating what we think of as proprietary, whether we are aware of it or not.

Featured image: Dassault Consumer Expectation 
 





Michael Barnard is Chief Strategist with TFIE Strategy Inc. He works with startups, existing businesses and investors to identify opportunities for significant bottom line growth and cost takeout in our rapidly transforming world. He is editor of The Future is Electric, a Medium publication. He regularly publishes analyses of low-carbon technology and policy in sites including Newsweek, Slate, Forbes, Huffington Post, Quartz, CleanTechnica and RenewEconomy, and his work is regularly included in textbooks. Third-party articles on his analyses and interviews have been published in dozens of news sites globally and have reached #1 on Reddit Science. Much of his work originates on Quora.com, where Mike has been a Top Writer annually since 2012. He’s available for consulting engagements, speaking engagements and Board positions.
