
How we can build AI to help humans, not hurt us | Margaret Mitchell


As a research scientist at Google, Margaret Mitchell helps develop computers that can communicate about what they see and understand. She tells a cautionary tale about the gaps, blind spots and biases we subconsciously encode into AI — and asks us to consider what the technology we create today will mean for tomorrow. "All that we see now is a snapshot in the evolution of artificial intelligence," Mitchell says. "If we want AI to evolve in a way that helps humans, then we need to define the goals and strategies that enable that path now."

Check out more TED Talks:

The TED Talks channel features the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design — plus science, business, global issues, the arts and more.


52 Comments on How we can build AI to help humans, not hurt us | Margaret Mitchell

  1. rahul raj show // 12th March 2018 at 1:56 pm // Reply

    proudly first one to like and comment

  2. If we can manage AI well, we could create a perfect utopian world and live lives beyond what we can currently comprehend. Maximal pleasure of all kinds in simulated lives with AI.

    • Valinax Who is “we”?

    • Getting up 4 Christ // 12th March 2018 at 5:22 pm // Reply

      Valinax I know the name of Jesus Christ can bind and cast out any evil spirits of the unseen world.
      Good day.

    • Getting up 4 Christ You don’t. You’re just deluded and believe in nonsense. Jesus was a normal man, not a god. There are no “evil spirits” or “spirits” of any sort.

    • Stuart Young // 12th March 2018 at 6:26 pm // Reply

      I’ve been saying this. Most probably it’ll be a dystopian future where those who have the resources will win it all. The rest of us will die. On the upside, it’ll be really good for the planet.

    • “Maximal pleasure of all kinds in simulated lives”
      So an AI Utopia is a neverending VR/AR Roman-style pleasure feast?!
      Interesting futurescape…

  3. Hari Shankar // 12th March 2018 at 2:00 pm // Reply

    By not building ai

    • Ti Mo Like she says in the video, it is our choice that determines what AI will become. Anything is possible with it. Yes, the Elon Musk quote was correct in saying it is the biggest risk for humanity, but it is also the biggest reward. As for the Hawking quote: yes, it could end mankind, but we are already doing damage ourselves. Global warming, war, famine… I don’t know what else to say, mate.

    • Yeah, but in my opinion the problem is that there is no “us”; mankind is not united at all, but rather exploiting each other. So whose choice will it be? I think 7+ billion people with different agendas taking on a task like building human-like AI is not controllable and doesn’t help anyone. Even more so because the distribution of power is totally imbalanced. It’s just that people have always wanted an equally intelligent being to interact with. That’s why everyone wants AI, and because of our weird dream of “living in the future”. And from the technological singularity on, it will all be uncontrollable anyway.

    • As absurd as refusing to have children because they may kill you.

    • It’s a black hole for sure.

    • Sarah Rocksdale // 12th March 2018 at 8:51 pm // Reply

      HAHA! True

  4. Hoàng Kim Việt // 12th March 2018 at 2:04 pm // Reply

    After watching this video, I remember the quote from Forrest Gump’s mother: “Life was like a box of chocolates. You never know what you’re gonna get.” We don’t know what life will be like when we live with AI, but I hope everything will be OK :0

  5. Aaron565pwns // 12th March 2018 at 2:14 pm // Reply

    AI is built by closed groups of individuals for their own benefit, like corporations and governments. It will be used to enslave you; the premise is a total intellectual failure. They are already gathering mass data and using it for free. You are contributing towards AI that can draw conclusions faster than the masses ever could, and it will be in the hands of actionable groups who are already acting with impunity.

    • Aaron565pwns True AI would not be anyone’s tool; machine learning software or devices, yes. True AI is like the praying mantis that at some point stops worrying about your hands and boots, looks you right in the eye, and figures out what to do about -YOU.- And all the bureaus, agencies, governments and corporations won’t be able to stop it.

  6. Newbies Obnixus // 12th March 2018 at 2:16 pm // Reply

    The future is the future. Just do your best today.

  7. What a terribly written talk. It’s all over the place and lacks any focus whatsoever.

  8. Ruben Jimenez // 12th March 2018 at 2:26 pm // Reply

    Evil will find a way to manipulate and use AI to its benefit, and that’s a promise

    • Sudhanshu Sharma // 12th March 2018 at 7:50 pm // Reply

      Ruben Jimenez Exactly! No matter how smart and secure the coding or computer is, hackers will always find loopholes to take advantage of it.

  9. The dog is not _that_ cute…

    • Penny Lane reported

    • Luka O., what did I report?

    • LMAO!
      Either it’s sarcastic.. OR, “Reported” up there failed to realize that you’re referring to a literal line from the video where she states that a “…programmed ai is capable of noticing that the dog [in the picture] is cute.” And mistakenly believes that you’re referring to the woman doing the speech as a “dog” who “isn’t that cute”.

      Which quite frankly is ridiculous, since she’s exceptionally pretty.
      I *really* want to believe that it’s the prior, and that someone didn’t just immediately jump to [a wrong] conclusion, and then got ‘Thumbs Up’d’ by 6 additional people who also failed to understand the meaning behind your comment…
      But I sadly believe the latter to likely be the case. 🙁

      Which…quite frankly.. speaks VOLUMES about the people who are “defending” the speaker, that they saw someone mention a “dog” and IMMEDIATELY ASSUMED that it MUST be referring to the woman on stage.

    • Why would I call her a dog? That makes literally no sense. Not even as sarcasm. People are weird.

  10. not my proudest fap

  11. She is mostly talking about “smart” software, not real artificial intelligence.
    Years ago we used to believe that a computer needed to be intelligent in order to beat a person at chess. Today we easily achieve this by using algorithms that prune trees of possible chess moves, given a specific position on the board.
    We used to believe that computers would need to be intelligent in order to beat us at more complex games like Go or Jeopardy. And while the algorithms and neural networks that achieved these goals certainly are amazing, they are not intelligent.
    We tend to call everything “AI” today. The opponents in your favorite video games are “AIs”. The voice assistant on your phone is an “AI”. “AI” is used to “understand” images that are shown to it.

    None of this is actual artificial intelligence. We use algorithms or neural networks to achieve these tasks. And while neural networks do “learn”, in the end what we do is give them a huge amount of data, classify it for them, and then let them guess the correct classification for images they have not seen. It takes tens or hundreds of thousands of images to “train” such a network. The result is a tool that can differentiate between a dog and a cat in… most cases. However, it cannot do anything else. It is not “intelligent”. It is only a network of numbers optimized in such a way that an image entering the network is put into the category “dog”, “cat” or “none”.
    Of course this is a rather simple example. However, computers “understand” your voice in exactly the same way. Audio data is broken down into a vector of information, which enters networks that recognize specific letters and then build the most plausible sentence out of the most probable letters recognized. The computer doesn’t “understand” what you said. It only analyses input data and maps it to the most probable result.

    My point is… we don’t even know what “intelligence” or “consciousness” is. We really have no clue.
    Just putting together a ton of classification algorithms won’t result in a conscious computer program.
    However, I am not saying that there is no reason to worry about AI or that we should not think about possible problems that might arise in the future.
    The moment we create the first artificial general intelligence (meaning an artificial intelligence on the cognitive level of a human, or even a baby, one that can truly learn and improve itself) might only be minutes, hours, or weeks away from the point where it improves itself into the first artificial super intelligence. Such an intelligence would seem like a god to us. It could do things we couldn’t even imagine, and from that point on it would not need us in the slightest.
    We don’t know what would happen at that point. An intelligence probably needs some kind of goal, or a reason. Without a reason (like eating), we would never do anything and thus never learn anything. It is reasonable to assume that an artificial intelligence would also need some kind of reason. People seem to assume that an artificial super intelligence would, for example, reach the conclusion that humans are bad, and then begin to exterminate us. Sure, it might seem that way. After all, we are destroying this planet, right? Well… that’s the thing. Does this matter from the perspective of a god? Our planet is like a tiny speck of dust in the vastness of space. The AI might just be driven by its ability to learn new things. As such it would research at an incredible speed. It might also be grateful for being created and use some of its capacity to improve life on Earth. Who knows. These are the things we need to think about. And these are the things we need to be prepared for. Any “AI” that goes in the direction of becoming an artificial general intelligence needs to have the right “motivation” for existing.

    • SadamFlu because building algorithms and reaching the point where programs can differentiate between different things definitely won’t play a part in the creation of true AI.

    • +Arjun Satheesh +Fabian H.
      Thanks to you both for saving me a lot of typing. Arjun, I believe, identified the key danger: not that we might imbue genuine AI, or even limited machine-learning tools, with our own cognitive biases, but more seriously, that we might produce true AI without any biases of any kind, including the universal values affirmed by the mass of humanity: justice, compassion, patience, loyalty. Then we would have a real-life Lawnmower Man or MCP. Worse still, we might do so without knowing when or how we did it.

    • I read this comment instead of watching the talk. Turns out it was more interesting.

    • It is a phase of AI; it is weak AI, but it’s still AI. The next step is not even close to our level of intelligence, but could still be somewhat dangerous. A better AI than the one we have today would be able to play Warcraft thanks to its knowledge of how to play StarCraft, or use its knowledge of chess to play Go, instead of having to start from the beginning. A few steps after that comes strong AI, an AI with a mind of its own. All of those things are still artificial intelligences, even if the first stages of AI do not have feelings or a mind of their own.

      But even if you know what AI is, talking about all of the types of AI as if they were the same is not wise. Weak AI is not in any way dangerous, as long as we are the ones controlling what it has access to. But a strong AI would probably be able to hack its way past a firewall if that’s what it wanted to do. Even an AGI (artificial general intelligence) would be somewhat dangerous, since we don’t really know what it knows, or thinks it knows, about new subjects when it is faced with them.
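The “network of numbers” description in comment 11 can be made concrete with a toy sketch. Everything below is invented for illustration (the two made-up features, the data, the nearest-centroid method); real image classifiers are vastly larger, but the principle is the same: fit numbers to labeled examples, then map new inputs to the closest category by pure arithmetic, with no understanding involved.

```python
def train(examples):
    """Average the feature vectors for each label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def classify(model, features):
    """Pick the label whose centroid is closest: distance arithmetic, no 'insight'."""
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(features, center))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical 2-number "images": (ear pointiness, snout length)
training_data = [
    ((0.9, 0.2), "cat"), ((0.8, 0.3), "cat"),
    ((0.3, 0.9), "dog"), ((0.2, 0.8), "dog"),
]
model = train(training_data)
print(classify(model, (0.85, 0.25)))  # prints "cat"
```

The model is literally just a dictionary of averaged numbers; swap the training data and the same code “recognizes” something else, which is the commenter’s point about such tools not being intelligent.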

  12. I’m studying Marketing at university and I found out AI is something that will always harm humans. Businesses don’t care about people losing jobs; AI will do the work quicker and doesn’t need to be paid, so it’s cheaper for businesses. It’s the sad truth, and now I’m even more scared for the future of myself as well as other people. I think in the future there will be fewer jobs for doctors, nurses, accountants etc. But there are certain things AI cannot do. Then again, who knows; in the future AI may be just like us

  13. Stijn Vogels // 12th March 2018 at 5:22 pm // Reply

    AI is not a car, as stated at the end of the video. Present-day AI is more like a child, learning to recognise shapes and figures.
    What’s that? A dog. “Cute.” A woman. “She looks happy.” A burning building. “Cool.” That’s what a kid would say. AI is not a car. It’s learning to grow like a human, because it’s being taught by humans. If you want to teach it to make moral choices, the AI will have to learn and play. Where is its playground and who will be its playmates? Better teach it well, before it becomes a moody teenager.

  14. The Mean Arena // 12th March 2018 at 5:51 pm // Reply

    I bet she looks great in a bikini!

    • Chocolate Moose // 12th March 2018 at 8:26 pm // Reply

      The Mean Arena just why?

    • Chocolate Moose – Likely because she is gorgeous.
      To my eyes, she is beautifully curvy, fair-skinned with red hair, and has the kind of light blue eyes that would be easy to get lost in.

      I’m not (personally) saying that those traits are any more important than who she is as a person (her personality).

      All I’m saying is that they are noticeable in a positive manner. 🙂

    • Well, if that’s the dominant thought that came to your mind, well and good. I just have to wonder why you’re watching TED talks?

      Honestly, I do think so too though. Curvalicious!

    • Wtf does that even have to do with what she’s talking about??

    • The Mean Arena // 13th March 2018 at 12:22 am // Reply

      It has nothing to do with what she’s talking about. Nobody said we needed to talk about what she’s talking about. She’s obviously smart and on top of that, good looking to me. So yes, I bet she would look great in a bikini! If you want to talk about AI, then let’s talk about it. Did I mention how good I think she’d look in a bikini?

  15. I don’t understand the title of the video. It implies that we have been making AI with the intent to hurt humans in the first place.

  16. Why do we need to teach computers human feelings, emotions or mannerisms? There is no benefit to them knowing this sort of thing. I believe in AI, but I believe in purpose-built AI: e.g. giving it a human-made concept and having it iterate through generations’ worth of ideas. An AI that can determine how we feel and predict what we’re about to do next is no AI we need. The only reason to teach a computer feelings/emotions is to build a conscious one, which is what Stephen Hawking etc. are against.

  17. She may know her field, but she’s neither a maven nor a good presenter. She has no command of her topic or the audience. She sounds like either a high-schooler or an academic expert condescending to a lay audience.
