Almost 5 years ago, Mother Jones[1] published a compelling vision of how quickly Artificial Intelligence (AI) could take over our jobs and completely shift the paradigm of our economy. One of the most convincing elements of this dystopian future is the exponential acceleration with which digital machines (computers) are catching up to the power of the human brain. If the projection is accurate, we’re halfway through the last 15 years of that exponential acceleration, the stretch in which 99.95% of the catch-up occurs.

While there’s plenty to debate about the prediction, from how the power of the human brain is measured to whether we even understand enough about the human brain to quantify it, there’s no denying that the power of computing is increasing at an astounding pace. Delegating the unsavory parts of life to machines is provocative, but the realization that machines could replace the parts of life where we find joy and meaning, like work and caring for others, can be frightening.

Already, my new car is a little too concerned with entertaining me, thinking for me, and protecting me from myself, seemingly in that order. It’s actually the car’s designers and corporate legal team projecting their combined agenda through the car. About a month after I got the car, I was driving my 12-year-old son to baseball. Suddenly, he burst into uncontrollable laughter, pointing at the center console screen while shrieking his amusement. When I had a moment to look — safely — I read the following:

Taking your eyes off the road too long or too often while using this system could cause a crash, resulting in injury or death to you or others. Focus your attention on driving.

First off, I can’t tell you how immeasurable my parental pride is when my offspring immediately and completely understands irony. Secondly, are you kidding me? I was most certainly watching the road. What even provokes this message? Almost a year into owning the car, I have no clue why it appears every few weeks. Adding to the irony, I have to tap OK to dismiss it.

If we want to truly deliver on the promise of a better world with AI, we need to be quite curious, conscientious, and considerate about how we design, develop, and deploy it. Not just once all this intelligence is firmly in place and not as token gestures either. So far, we haven’t done enough to make AI experiences as worthy as they can be with the tools and technology we already have. Building good habits now for creating and distributing AI will enable us to do so, even more maturely, later.

Around the same time AI was being named, a Hindu spiritual teacher from India offered us all some compelling advice:

“Put your heart, mind, and soul into even your smallest acts. This is the secret of success.” —Swami Sivananda (Hindu spiritual teacher, 1887–1963)

His thought, shared by many who came before and after him, suggests we use more than just our minds in our pursuits. I can think of no better place to heed this advice than in the manufacture of AI.

Sure, autonomous cars will leapfrog issues like my car mistakenly thinking I’m not watching the road, but AI transformation isn’t like flipping a switch. It takes time and effort, with mistakes and complete failures along the way. Humans don’t adapt exponentially. Since most things we know will be part of this AI transformation, most things will be fraught with additional conflict before resolution. Through that conflict, we can find ways to bring more heart and soul into AI. For example, a fellow passenger would never use so many words to say “watch the road!” In fact, he or she would likely — at least at first — frame such a concern as a question, “are you watching the road?”

Are there similar ways a car can act? Is this a condition where a car shouldn’t act? Sometimes heart and soul shouldn’t be artificial at all, and humans should rely on AI for computational brainpower alone. It’s all an evolving balance where we still have choice.

Going even deeper than Swami Sivananda’s statement, the Japanese concept of “kokoro” indivisibly connects mind, heart, and spirit.[2] Having three separate words that need to be joined with “and” suggests divisions in English, and likely most of Western culture, that don’t exist in Japanese, and perhaps most of Eastern culture. However, even realizing an interplay between mind, heart, and spirit can help us. Researchers at Kyoto University and Nanzan University are beginning to understand, scientifically, what the “creatives” of many cultures have long known: a connection between mind, heart, and soul. Since 1993, experts in religion, philosophy, and science have been looking to connect their areas of knowledge, usually considered separately, to better understand our humanness. Their study seems to support Swami Sivananda’s approach:

“Thoughts, feelings, and desires, or will, are all interrelated aspects of what it means to be human, and we would be wise to take all of them, and their interrelationship, into account in order to understand human experience.” —Paul Swanson (Professor of Humanities, Nanzan University)

My car’s artificially intelligent thoughts, feelings, and desires are far from well-developed or humanly subtle. In fact, they’re immature and annoying. For most of us, driving doesn’t need 100% of our focus. We’re able to think, sing along to the radio, and carry on a conversation with passengers. When I hop in my car, I’m usually mid-thought about something other than driving but can easily manage starting the car, fastening my seatbelt, and backing out of the driveway. Let’s look at my car’s artificially intelligent thoughts, feelings, and desires as I do so.

[Table: Concern vs. Reality]

My car’s will is to protect me, even in conditions it never needs to. Unfortunately, this artificial intent has the opposite of its desired effect. My car confuses, distracts, and — maybe worst — desensitizes me through frequent and repetitive warnings, most of them unnecessary. Hello, designers and legal team: is that what you want?

A human passenger, with common sense, intuition, and the human desire to understand me (and to live another day), has a better idea of when and how to communicate with a driver, especially when navigating traffic. One sweet spot for AI, then, is clearly somewhere between yesterday’s zero intelligence and today’s immature and annoying hypervigilance:

“Until ‘smart’ gets smarter, dumber is better.” —Emily Wengert (Group VP for User Experience, Huge LLC)[3]

So let’s make some good choices. Where patterns and boundaries are well-established, understood, and important — like taking inventory in a retail store or capturing and preparing financial information for taxes — the non-intuitive, non-desirous, and non-chaotic aspects of AI will perform well. But where our appetite is great for boundless novelty, creativity, and diversity — like disruptive business, sport, or arts ideas — humans will perform far better. When humans have the basics of food, water, and shelter, their attention rapidly turns to innovation, social interaction, and fun. If we’re looking for machines to run our world on their own and for humans to passively exist, then full speed ahead on AI and the intellectual pursuits underway. However, if we’re looking for a world where humans have purpose, can continue to better themselves and humanity, and contribute to the novelty, creativity, and diversity we crave, then let’s get on board with putting our hearts, minds, and souls into even our smallest AI acts as well.

How do we do that?

First off, we need to embrace some mystery and ambiguity, like just what heart and soul mean, and we need to recognize that everyone won’t completely share the same meanings either. The 7.6 billion people on earth have different values, beliefs, and cultures.

One of the mysteries to embrace is how the heart is connected to the brain. And even how we talk about the heart, an organ in our chests, as having feelings beyond its pace, rhythm, and intensity. It turns out there are more signals from our hearts to our brains than the other way around. And, unlike the way a healthy heart is typically portrayed, heartbeats have irregular rhythms. They adapt to what is going on and to what is asked of them. The 40,000 neurons responsible for much of this are physically located in our hearts, not our brains. They can sense, feel, learn, and remember. Scientists have determined that the heart not only responds to emotion, but that the signals generated by its rhythmic activity play a major role in determining our emotional experience.

Even more mysterious is our soul. We don’t even have an organ in our bodies corresponding to “soul” and yet — beyond our minds and hearts — many people agree there’s something in each of us that guides, drives, and inspires us. Creatives, philosophers, and others — including even some scientists — refer to the soul. And each of us has some inner self we embrace, speak with, and trust. Breathing might be a reasonable representation of soul. Like the heart and mind, it’s a core presence of our lives, whether awake, asleep, or in meditation. For those who believe in Heaven, the soul is the “thing” that goes on to the afterlife. For those who believe in reincarnation, the soul is what migrates from life to life.

I think curiosity, conscientiousness, and consideration reside within the soul; imagination, improvisation, and intuition as well. Machines can’t really do these things now and maybe they never will. Even if they do, I don’t think artificial versions of these traits have purpose. That a machine can thoroughly analyze the information it’s fed, recognize patterns and play them back, even using predictive analytics, is not the same thing as a musician, let’s say, using his or her imagination, improvisation, and intuition to create something inspiring and beautiful with other humans.

Assuming we accept mind, heart, and soul working together (or even true kokoro), how do we put our hearts, minds, and souls into even our smallest AI acts? This could be as simple as a reasonable, single reminder tone for an unfastened seatbelt (once the car is in gear) with escalating follow-up (if needed).
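To make that concrete, here is a minimal sketch of that kind of reminder logic. The signal names, tones, and the 30-second pause are hypothetical illustrations, not any manufacturer’s actual implementation; the point is the single-tone-then-escalate posture.

```python
# A hedged sketch, not real automotive code: one gentle tone once the car is
# in gear, and an escalated follow-up only if the belt stays unfastened.
# in_gear, belt_fastened, and play_tone are hypothetical callables.
import time

def remind_seatbelt(in_gear, belt_fastened, play_tone, wait_seconds=30):
    """One reasonable reminder, with escalating follow-up only if needed."""
    if not in_gear() or belt_fastened():
        return  # nothing to say; don't nag

    play_tone("single chime")      # the single, reasonable reminder
    time.sleep(wait_seconds)       # give the driver a moment to respond

    if not belt_fastened():        # still unfastened? escalate once
        play_tone("repeating chime")
```

The value isn’t in the particular tones or timing; it’s that the machine speaks once, waits, and escalates only when the situation truly calls for it, which is much closer to how a considerate passenger behaves.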

The way we manufacture AI should be an extension of how we research, design, and test today. In short, we need to research to understand and empathize with the people we intend to serve, but beyond human-to-human and human-to-machine, and beyond today’s common keyboard, mouse, touch, and voice interactions. For every AI interaction we consider, we need to gain an understanding of not just what, where, when, why, and how someone is doing something at macro and micro levels but how the mind, heart, and soul are working together. If my car navigation system is asking me for input while I’m dealing with a gnarly traffic situation, it should understand this and have the patience to wait, or at least respond to my quick “hang on.” A human navigator sitting next to me would do so, naturally. His or her heart and soul would work with mine to protect, think, and entertain…flipping the priorities my car seems to have.
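As another hedged sketch, that “patient navigator” behavior might look something like the following. The workload estimate, the 0.7 threshold, and the “hang on” signal are hypothetical stand-ins for whatever sensing a real system would use.

```python
# A hedged sketch of a navigation prompt that waits for a calmer moment.
# driving_workload() and driver_said_hang_on() are hypothetical signals; the
# BUSY_THRESHOLD value is an arbitrary illustration, not a real calibration.
from collections import deque

BUSY_THRESHOLD = 0.7
pending_prompts = deque()

def ask_driver(prompt, driving_workload, driver_said_hang_on, show_prompt):
    """Ask only when the driver can spare attention; otherwise defer."""
    if driving_workload() > BUSY_THRESHOLD or driver_said_hang_on():
        pending_prompts.append(prompt)   # hold the question for later
        return
    show_prompt(prompt)

def on_calmer_moment(driving_workload, show_prompt):
    """Replay deferred prompts once the traffic situation eases."""
    while pending_prompts and driving_workload() <= BUSY_THRESHOLD:
        show_prompt(pending_prompts.popleft())
```

Again, what matters is the posture: the system defers to the driver’s situation instead of demanding attention on its own schedule.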

While big data will give us a wealth of hard data to research and analyze for keen insights, we also need to dig in deeply and personally with those we serve, for softer data. We need to be careful not to design only to the average or to the vast majority but to the edge and corner cases as well. For example, there are more than 250 million cars and trucks on U.S. roads. Every 1% of those 250 million is still 2.5 million. Designing for only 99% ignores millions. And when those millions are drivers contending with immature AI on the road, that’s too dangerous.

From this greater understanding and more complete approach, we will be able to better designate who — machine or human — does which parts of what, where, when, why, and how. As we do today, continually learning, designing, testing, and refining will be key. If you’re already in the business of creating something for others and you aren’t researching, designing, testing, and refining, it’s time to catch up or you’ll be woefully behind very soon. For example, if you had called it quits with your early-2007 website, you wouldn’t be in business on smartphones today. There is no “done” in an evolving world.

Take a moment to think about today’s ambiguity and conflict in your worlds of work and elsewhere. Then imagine that, like the acceleration of computing power, these ambiguities and conflicts will multiply exponentially. I encourage you to embrace the ambiguity and conflict and work them through to resolution. As a decision-maker, if you find yourself saying “let’s just do what we usually do” or “let’s do what everyone else does,” you’ve likely found ambiguity and conflict to wrestle. If we let AI work itself out, we’ll have tomorrow’s equivalent of today’s tragically comic machine errors…exponentially…and with even less recourse than today.

I’ve been fortunate to experience ambiguity, conflict, and resolution in life, at work, and through music. And it’s the music, especially jazz, that provides the most applicable models and frameworks for my life and work. To sit in with a group of musicians you’ve never met and to figure out what to play, where to take the music, and how to create a musical whole that is greater than the sum of its parts, is what we’re all trying to do on a larger scale in life and work. So acknowledging, understanding, and developing imagination, improvisation, and intuition is essential for humans and fundamental to putting our hearts, minds, and souls into even our smallest AI acts.

[1] Kevin Drum, “Welcome Robot Overlords. Please Don’t Fire Us,” Mother Jones, May/June 2013.
[2] Ephrat Livni, “This Japanese word connecting mind, body and spirit is also driving scientific discovery,” Quartz, April 6, 2017.
[3] Emily Wengert, “The New Kid Defense: The Algorithm Made Me Do It,” Magenta, October 5, 2017.