All posts in “Artificial Intelligence”

The damage of defaults

Apple popped out a new pair of AirPods this week. The design looks exactly like the old pair of AirPods. Which means I’m never going to use them because Apple’s bulbous earbuds don’t fit my ears. Think square peg, round hole.

The only way I could rock AirPods would be to walk around with hands clamped to the sides of my head to stop them from falling out. Which might make a nice cut in a glossy Apple ad for the gizmo — suggesting a feeling of closeness to the music, such that you can’t help but cup; a suggestive visual metaphor for the aural intimacy Apple surely wants its technology to communicate.

But the reality of trying to use earbuds that don’t fit is not that at all. It’s just shit. They fall out at the slightest movement so you either sit and never turn your head or, yes, hold them in with your hands. Oh hai, hands-not-so-free-pods!

The obvious point here is that one size does not fit all — howsoever much Apple’s Jony Ive and his softly spoken design team believe they have devised a universal earbud that pops snugly in every ear and just works. Sorry, nope!

A proportion of iOS users — perhaps other petite women like me, or indeed men with less capacious ear holes — are simply being removed from Apple’s sales equation where earbuds are concerned. Apple is pretending we don’t exist.

Sure we can just buy another brand of more appropriately sized earbuds. The in-ear, noise-canceling kind are my preference. Apple does not make ‘InPods’. But that’s not a huge deal. Well, not yet.

It’s true, the consumer tech giant did also delete the headphone jack from iPhones. Thereby depreciating my existing pair of wired in-ear headphones (if I ever upgrade to a 3.5mm-jack-less iPhone). But I could just shell out for Bluetooth wireless in-ear buds that fit my shell-like ears and carry on as normal.

Universal in-ear headphones have existed for years, of course. A delightful design concept. You get a selection of different sized rubber caps shipped with the product and choose the size that best fits.

Unfortunately Apple isn’t in the ‘InPods’ business though. Possibly for aesthetic reasons. Most likely because — and there’s more than a little irony here — an in-ear design wouldn’t be naturally roomy enough to fit all the stuff Siri needs to, y’know, fake intelligence.

Which means people like me with small ears are being passed over in favor of Apple’s voice assistant. So that’s AI: 1, non-‘standard’-sized human: 0. Which also, unsurprisingly, feels like shit.

I say ‘yet’ because if voice computing does become the next major computing interaction paradigm, as some believe — given how Internet connectivity is set to get baked into everything (and sticking screens everywhere would be a visual and usability nightmare; albeit microphones everywhere is a privacy nightmare… ) — then the minority of humans with petite earholes will be at a disadvantage vs those who can just pop in their smart, sensor-packed earbud and get on with telling their Internet-enabled surroundings to do their bidding.

Will parents of future generations of designer babies select for adequately capacious earholes so their child can pop an AI in? Let’s hope not.

We’re also not at the voice computing singularity yet. Outside the usual tech bubbles it remains a bit of a novel gimmick. Amazon has drummed up some interest with in-home smart speakers housing its own voice AI Alexa (a brand choice that has, incidentally, caused a verbal headache for actual humans called Alexa). Though its Echo smart speakers appear to mostly get used as expensive weather checkers and egg timers. Or else for playing music — a function that a standard speaker or smartphone will happily perform.

Certainly a voice AI is not something you need with you 24/7 yet. Prodding at a touchscreen remains the standard way of tapping into the power and convenience of mobile computing for the majority of consumers in developed markets.

The thing is, though, it still grates to be ignored. To be told — even indirectly — by one of the world’s wealthiest consumer technology companies that it doesn’t believe your ears exist.

Or, well, that it’s weighed up the sales calculations and decided it’s okay to drop a petite-holed minority on the cutting room floor. So that’s ‘ear meet AirPod’. Not ‘AirPod meet ear’ then.

But the underlying issue is much bigger than Apple’s (in my case) oversized earbuds. Its latest shiny set of AirPods is just an ill-fitting reminder of how many technology defaults simply don’t ‘fit’ the world as claimed.

Because if cash-rich Apple’s okay with promoting a universal default (that isn’t), think of all the less well resourced technology firms chasing scale for other single-sized, ill-fitting solutions. And all the problems flowing from attempts to mash ill-mapped technology onto society at large.

When it comes to wrong-sized physical kit I’ve had similar issues with standard office computing equipment and furniture. Products that seem — surprise, surprise! — to have been default designed with a 6ft strapping guy in mind. Keyboards so long they end up gifting the smaller user RSI. Office chairs that deliver chronic back-pain as a service. Chunky mice that quickly rack the hand with pain. (Apple is a historical offender there too, I’m afraid.)

The fix for such ergonomic design failures is simply not to use the kit. To find a better-sized (often DIY) alternative that does ‘fit’.

But a DIY fix may not be an option when the discrepancy is embedded at the software level — and where a system is being applied to you, rather than you, the human, wanting to augment yourself with a bit of tech, such as a pair of smart earbuds.

With software, embedded flaws and system design failures may also be harder to spot because it’s not necessarily immediately obvious there’s a problem. Oftentimes algorithmic bias isn’t visible until damage has been done.

And there’s no shortage of stories already about how software defaults configured for a biased median have ended up causing real-world harm. (See, for example, ProPublica’s analysis of the COMPAS recidivism tool — software it found incorrectly judging black defendants more likely to reoffend than white defendants. So: software amplifying existing racial prejudice.)

Of course AI makes this problem so much worse.

Which is why the emphasis must be on catching bias in the datasets — before there is a chance for prejudice or bias to be ‘systematized’ and get baked into algorithms that can do damage at scale.

The algorithms must also be explainable. And outcomes auditable. Transparency as disinfectant; not secret blackboxes stuffed with unknowable code.
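
To make ‘auditable’ a little more concrete, here is a minimal Python sketch of the kind of outcome audit that can surface this class of bias: compare error rates across groups and treat a large gap as a warning sign. It is deliberately toy code with made-up column names and data, not ProPublica’s methodology or any vendor’s actual system.

```python
# A toy outcome audit, in the spirit of the ProPublica analysis mentioned above.
# All column names and data are hypothetical; real audits need real care around
# sampling, base rates and the definition of "error" being compared.
import pandas as pd


def false_positive_rate(df: pd.DataFrame, group: str) -> float:
    """Share of people in `group` who did not reoffend but were scored high risk."""
    subset = df[(df["group"] == group) & (df["reoffended"] == 0)]
    if len(subset) == 0:
        return float("nan")
    return (subset["predicted_high_risk"] == 1).mean()


def audit(df: pd.DataFrame) -> dict:
    """Compare false positive rates across groups; a large gap is a red flag."""
    return {g: false_positive_rate(df, g) for g in df["group"].unique()}


# Tiny synthetic example:
scores = pd.DataFrame({
    "group":               ["a", "a", "a", "b", "b", "b"],
    "reoffended":          [0,   0,   1,   0,   0,   1],
    "predicted_high_risk": [0,   1,   1,   1,   1,   0],
})
print(audit(scores))  # {'a': 0.5, 'b': 1.0} -> group 'b' is wrongly flagged far more often
```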

Doing all this requires huge up-front thought and effort on system design, and an even bigger change of attitude. It also needs massive, massive attention to diversity. An industry-wide championing of humanity’s multifaceted and multi-sized reality — and to making sure that’s reflected in both data and design choices (and therefore the teams doing the design and dev work).

You could say what’s needed is a recognition that there’s never, ever a one-size-fits-all plug.

Indeed, that all algorithmic ‘solutions’ are abstractions that make compromises on accuracy and utility. And that those trade-offs can become viciously cutting knives that exclude, deny, disadvantage, delete and damage people at scale.

Expensive earbuds that won’t stay put are just a handy visual metaphor.

And while discussion about the risks and challenges of algorithmic bias has stepped up in recent years, as AI technologies have proliferated — with mainstream tech conferences actively debating how to “democratize AI” and bake diversity and ethics into system design via a development focus on principles like transparency, explainability, accountability and fairness — the industry has not even begun to fix its diversity problem.

It’s barely moved the needle on diversity. And its products continue to reflect that fundamental flaw.

Many — if not most — of the tech industry’s problems can be traced back to the fact that inadequately diverse teams are chasing scale while lacking the perspective to realize their system design is repurposing human harm as a de facto performance measure. (Although ‘lack of perspective’ is the charitable interpretation in certain cases; moral vacuum may be closer to the mark.)

As WWW creator Sir Tim Berners-Lee has pointed out, system design is now society design. That means engineers, coders and AI technologists are all working at the frontline of ethics. The design choices they make have the potential to impact, influence and shape the lives of millions and even billions of people.

And when you’re designing society a median mindset and limited perspective cannot ever be an acceptable foundation. It’s also a recipe for product failure down the line.

The current backlash against big tech shows that the stakes and the damage are very real when poorly designed technologies get dumped thoughtlessly on people.

Life is messy and complex. People won’t fit a platform that oversimplifies and overlooks. And if your excuse for scaling harm is ‘we just didn’t think of that’ you’ve failed at your job and should really be headed out the door.

Because the consequences for being excluded by flawed system design are also scaling and stepping up as platforms proliferate and more life-impacting decisions get automated. Harm is being squared. Even as the underlying industry drum hasn’t skipped a beat in its prediction that everything will be digitized.

Which means that horribly biased parole systems are just the tip of the ethical iceberg. Think of healthcare, social welfare, law enforcement, education, recruitment, transportation, construction, urban environments, farming, the military: the list of what will be digitized — and of manual or human-overseen processes that will get systematized and automated — goes on.

Software — runs the industry mantra — is eating the world. That means badly designed technology products will harm more and more people.

But responsibility for sociotechnical misfit can’t just be scaled away as so much ‘collateral damage’.

So while an ‘elite’ design team led by a famous white guy might be able to craft a pleasingly curved earbud, such an approach cannot and does not automagically translate into AirPods with perfect, universal fit.

It’s someone’s standard. It’s certainly not mine.

We can posit that a more diverse Apple design team might have been able to rethink the AirPod design so as not to exclude those with smaller ears. Or make a case to convince the powers that be in Cupertino to add another size choice. We can but speculate.

What’s clear is the future of technology design can’t be so stubborn.

It must be radically inclusive and incredibly sensitive. Human-centric. Not locked to damaging defaults in its haste to impose a limited set of ideas.

Above all, it needs a listening ear on the world.

Indifference to difference and a blindspot for diversity will find no future here.

coParenter helps divorced parents settle disputes using A.I. and human mediation

A former judge and family law educator has teamed up with tech entrepreneurs to launch an app they hope will help divorced parents better manage their co-parenting disputes, communications, shared calendar, and other decisions within a single platform. The app, called coParenter, aims to be more comprehensive than its competitors, while also leveraging a combination of A.I. technology and on-demand human interaction to help co-parents navigate high-conflict situations.

The idea for coParenter emerged from the personal experience of co-founder Hon. Sherrill A. Ellsworth and that of entrepreneur Jonathan Verk, who had been through a divorce himself.

Ellsworth had been a presiding judge of the Superior Court in Riverside County, California for 20 years and a family law educator for ten. During this time, she saw firsthand how families were destroyed by today’s legal system.

“I witnessed countless families torn apart as they slogged through the family law system. I saw how families would battle over the simplest of disagreements like where their child will go to school, what doctor they should see and what their diet should be — all matters that belong at home, not in a courtroom,” she says.

Ellsworth also notes that 80 percent of the disagreements presented in the courtroom didn’t even require legal intervention – but most of the cases she presided over involved parents asking the judge to make the co-parenting decision.

As she came to the end of her career, she began to realize the legal system just wasn’t built for these sorts of situations.

She then met Jonathan Verk, previously EVP of Strategic Partnerships at Shazam and now coParenter CEO. Verk had just gone through a divorce and had an idea about how technology could help make the co-parenting process easier. He already had his longtime friend, serial entrepreneur Eric Weiss, now COO, on board to help build the system. But he needed someone with legal expertise.

That’s how coParenter was born.

The app, also built by CTO Niels Hansen, today exists alongside a whole host of other tools built for different aspects of the coparenting process.

That includes those apps designed to document communication like OurFamilyWizard, Talking Parents, AppClose, and Divvito Messenger; those for sharing calendars, like Custody Connection, Custody X Exchange, Alimentor; and even those that offer a combination of features like WeParent, 2houses, SmartCoparent, and Fayr, among others.

But the team at coParenter argues that their app covers all aspects of coparenting, including communication, documentation, calendar and schedule sharing, location-based tools for pickup and dropoff logging, expense tracking and reimbursements, schedule change requests, tools for making decisions on day-to-day parenting choices like haircuts, diet, allowance, use of media, etc., and more.

Notably, coParenter also offers a “solo mode” – meaning you can use the app even if the other co-parent refuses to do the same. This is a key feature that many rival apps lack.

However, the biggest differentiator is how coParenter puts a mediator of sorts in your pocket.

The app begins by using A.I., machine learning, and sentiment analysis technology to keep conversations civil. The tech will jump in to flag curse words, inflammatory phrases and offensive names to keep a heated conversation from escalating – much like a human mediator would do when trying to calm two warring parties.

When conversations take a bad turn, the app will pop up a warning message that asks the parent if they’re sure they want to use that term, allowing them time to pause and think. (If only social media platforms had built features like this!)
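
coParenter hasn’t published how its screening works under the hood, so the following is only a rough sketch of the general technique it describes: check a message against a small list of flagged terms and a crude hostility score, and trigger the ‘are you sure?’ prompt when either crosses a threshold. The word lists, threshold and function name are assumptions for illustration.

```python
# Hypothetical sketch only; not coParenter's actual implementation. A production
# system would use a trained sentiment/toxicity model rather than fixed word lists.
import re

BLOCKLIST = {"idiot", "liar", "useless"}                 # assumed example terms
HOSTILE_WORDS = {"never", "hate", "worst", "fault"}      # assumed example terms
HOSTILITY_THRESHOLD = 0.25                               # assumed tuning value


def should_warn(message: str) -> bool:
    """Return True if the app should pause and ask the sender to reconsider."""
    words = re.findall(r"[a-z']+", message.lower())
    if not words:
        return False
    if any(w in BLOCKLIST for w in words):               # hard flag on name-calling
        return True
    hostile = sum(1 for w in words if w in HOSTILE_WORDS)
    return hostile / len(words) >= HOSTILITY_THRESHOLD   # soft flag on overall tone


print(should_warn("You are a liar and this is all your fault"))  # True
print(should_warn("Can we move pickup to 6pm on Friday?"))       # False
```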

When parents need more assistance, they can opt to use the app instead of turning to lawyers.

The company offers on-demand access to professionals via either a monthly subscription ($12.99/month for 20 credits, enough for two mediations) or a yearly one ($119.99/year for 240 credits). Both parents can subscribe together for $199.99/year, with each receiving 240 credits.

“Comparatively, an average hour with a lawyer costs upwards of $250, just to file a single motion,” Ellsworth says.

These professionals are not mediators, but are licensed in their respective fields – typically family law attorneys, therapists, social workers, or other retired bench officers with strong conflict resolution backgrounds. Ellsworth oversees the professionals to ensure they have the proper guidance.

All communication between the parent and the professional is considered confidential and not subject to admission as evidence, as the goal is to stay out of the courts. However, all the history and documentation elsewhere in the app can be used in court, if the parents do end up there.

The app has been in beta for nearly a year and officially launched this January. To date, coParenter claims it has already helped resolve over 4,000 disputes, and more than 2,000 co-parents have used it for scheduling. The company says 81 percent of the disputing parents resolved all of their issues in the app without needing a professional mediator or legal professional.

CoParenter is available on both iOS and Android.

This robot can park your car for you

French startup Stanley Robotics showed off its self-driving parking robot at Lyon-Saint-Exupéry airport today. I couldn’t be there in person, but the service is going live by the end of March 2019.

The startup has been working on a robot called Stan, a giant machine that can literally pick up your car at the entrance of a gigantic parking lot and then park it for you. You might think that parking isn’t that hard, but the robot makes a lot of sense when you think about airport parking lots.

Those parking lots have become one of the most lucrative businesses for airport companies. But many airports don’t have a ton of space. They keep adding new terminals and it is becoming increasingly complicated to build more parking lots.

That’s why Stanley Robotics can turn existing parking lots into automated parking areas. It’s more efficient as you don’t need space to circulate between all parking spaces. According to the startup, you can create 50 percent more spaces in the same surface area.

If you’re traveling for a few months, Stan can put your car in a corner of the lot and park a few other cars in front of it, then make your car accessible again shortly before you land. This way, the whole process is transparent for the end user.

At Vinci’s Lyon airport, there will be 500 parking spaces dedicated to Stanley Robotics. Four robots will work day in, day out to move cars around the parking lot. But Vinci and Stanley Robotics already plan to expand this system to up to 6,000 spaces in total.

According to the airport website, booking a parking space for a week on the normal P5 parking lot costs €50.40. It costs €52.20 if you want a space on P5+, the parking lot managed by Stanley Robotics.

Self-driving cars are not there yet because the road is so unpredictable. But Stanley Robotics has removed all the unpredictable elements. You can’t walk onto the parking lot; you just interact with a garage at the gate. After the door is closed, the startup controls the environment from start to finish.

Now, let’s see if Vinci Airports plans to expand its partnership with Stanley Robotics to other airports around the world.

Tiny claws let drones perch like birds and bats

Drones are useful in countless ways, but that usefulness is often limited by the time they can stay in the air. Shouldn’t drones be able to take a load off too? With these special claws attached, they can perch or hang with ease, conserving battery power and vastly extending their flight time.

The claws, created by a highly multinational team of researchers I’ll list at the end, are inspired by birds and bats. The team noted that many flying animals have specially adapted feet or claws suited to attaching the creature to its favored surface. Sometimes they sit, sometimes they hang, sometimes they just kind of lean on it and don’t have to flap as hard.

As the researchers write:

In all of these cases, some suitably shaped part of the animal’s foot interacts with a structure in the environment and facilitates that less lift needs to be generated or that powered flight can be completely suspended. Our goal is to use the same concept, which is commonly referred to as “perching,” for UAVs [unmanned aerial vehicles].

“Perching,” you say? Go on…

We designed a modularized and actuated landing gear framework for rotary-wing UAVs consisting of an actuated gripper module and a set of contact modules that are mounted on the gripper’s fingers.

This modularization substantially increased the range of possible structures that can be exploited for perching and resting as compared with avian-inspired grippers.

Instead of trying to build one complex mechanism, like a pair of articulating feet, the team gave the drones a set of specially shaped 3D-printed static modules and one big gripper.

The drone surveys its surroundings using lidar or some other depth-aware sensor. This lets it characterize surfaces nearby and match those to a library of examples that it knows it can rest on.

(Figure from the paper: squared-off edges can be rested on, while a pole can be balanced on.)

If the drone sees a pole it needs to rest on, it can grab it from above. If it’s a horizontal bar, it can grip it and hang below, flipping up again when necessary. If it’s a ledge, it can use a little cutout to steady itself against the corner, letting it shut off some or all of its motors. These modules can easily be swapped out or modified depending on the mission.
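
The researchers’ planning code isn’t reproduced here, but the matching-and-selection step described above can be sketched roughly as follows: classify a sensed feature against a small library of perchable geometries, then pick the corresponding behavior. The type names, thresholds and action strings below are illustrative assumptions, not the paper’s actual implementation.

```python
# Illustrative sketch of the "match surface, pick behavior" step; the real system
# classifies lidar/depth data against the team's own module library.
from dataclasses import dataclass
from enum import Enum, auto


class Surface(Enum):
    POLE = auto()     # thin feature to grip and balance on from above
    BAR = auto()      # thin horizontal feature to grip and hang beneath
    LEDGE = auto()    # squared-off edge for the cutout module to lean on
    UNKNOWN = auto()


@dataclass
class SensedFeature:
    width_cm: float            # estimated cross-section from the depth sensor
    is_horizontal: bool
    has_free_space_below: bool


def classify(f: SensedFeature) -> Surface:
    """Match a sensed feature to the library of perchable geometries (toy thresholds)."""
    if f.width_cm < 5.0:
        return Surface.BAR if f.is_horizontal and f.has_free_space_below else Surface.POLE
    if f.is_horizontal:
        return Surface.LEDGE
    return Surface.UNKNOWN


def perch_action(s: Surface) -> str:
    """Pick a behavior for each surface type: grab, hang, lean, or keep flying."""
    return {
        Surface.POLE: "grip from above and balance",
        Surface.BAR: "grip and hang below, motors off",
        Surface.LEDGE: "rest the cutout against the corner and throttle down",
        Surface.UNKNOWN: "keep flying and look for another feature",
    }[s]


print(perch_action(classify(SensedFeature(3.0, True, True))))    # grip and hang below...
print(perch_action(classify(SensedFeature(40.0, True, False))))  # rest the cutout...
```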

I have to say the whole thing actually seems to work remarkably well for a prototype. The hard part appears to be the recognition of useful surfaces and the precise positioning required to land on them properly. But it’s useful enough — in professional and military applications especially, one suspects — that it seems likely to be a common feature in a few years.

The paper describing this system was published in the journal Science Robotics. I don’t want to leave anyone out, so it’s by: Kaiyu Hang, Ximin Lyu, Haoran Song, Johannes A. Stork, Aaron M. Dollar, Danica Kragic and Fu Zhang, from Yale, the Hong Kong University of Science and Technology, the University of Hong Kong, and the KTH Royal Institute of Technology.

The adversarial persuasion machine: a conversation with James Williams

James Williams may not be a household name yet in most tech circles, but he will be.

For this second in what will be a regular series of conversations exploring the ethics of the technology industry, I was delighted to be able to turn to one of our current generation’s most important young philosophers of tech.

Around a decade ago, Williams won the Founder’s Award, Google’s highest honor for its employees. Then in 2017, he won an even rarer award, this time for his scorching criticism of the entire digital technology industry in which he had worked so successfully. The inaugural winner of Cambridge University’s $100,000 “Nine Dots Prize” for original thinking, Williams was recognized for the fruits of his doctoral research at Oxford University, on how “digital technologies are making all forms of politics worth having impossible, as they privilege our impulses over our intentions and are designed to exploit our psychological vulnerabilities in order to direct us toward goals that may or may not align with our own.” In 2018, he published his brilliantly written book Stand Out of Our Light, an instant classic in the field of tech ethics.

In an in-depth conversation by phone and email, edited below for length and clarity, Williams told me about how and why our attention is under profound assault. At one point, he points out that the artificial intelligence which beat the world champion at the game Go is now aimed squarely — and rather successfully — at beating us, or at least convincing us to watch more YouTube videos and stay on our phones a lot longer than we otherwise would. And while most of us have sort of observed and lamented this phenomenon, Williams believes the consequences of things like smartphone compulsion could be much more dire and widespread than we realize, ultimately putting billions of people in profound danger while testing our ability to even have a human will.

It’s a chilling prospect, and yet somehow, if you read to the end of the interview, you’ll see Williams manages to end on an inspiring and hopeful note. Enjoy!

Editor’s note: this interview is approximately 5,500 words / 25 minutes read time. The first third has been ungated given the importance of this subject. To read the whole interview, be sure to join the Extra Crunch membership. ~ Danny Crichton

Introduction and background

Greg Epstein: I want to know more about your personal story. You grew up in West Texas. Then you found yourself at Google, where you won the Founder’s Award, Google’s highest honor. Then at some point you realized, “I’ve got to get out of here.” What was that journey like?

James Williams: This is going to sound neater and more intentional than it actually was, as is the case with most stories. In a lot of ways my life has been a ping-ponging back and forth between tech and the humanities, trying to bring them into some kind of conversation.

I spent my formative years in a town called Abilene, Texas, where my father was a university professor. It’s the kind of place where you get the day off school when the rodeo comes to town. Lots of good people there. But it’s not exactly a tech hub. Most of my tech education consisted of spending late nights, and full days in the summer, up in the university computer lab with my younger brother just messing around on the fast connection there. Later when I went to college, I started studying computer engineering, but I found that I had this itch about the broader “why” questions that on some deeper level I needed to scratch. So I changed my focus to literature.

After college, I started working at Google in their Seattle office, helping to grow their search ads business. I never, ever imagined I’d work in advertising, and there was some serious whiplash from going straight into that world after spending several hours a day reading James Joyce. Though I guess Leopold Bloom in Ulysses also works in advertising, so there’s at least some thread of a connection there. But I think what I found most compelling about the work at the time, and I guess this would have been in 2005, was the idea that we were fundamentally changing what advertising could be. If historically advertising had to be an annoying, distracting barrage on people’s attention, it didn’t have to anymore because we finally had the means to orient it around people’s actual intentions. And search, that “database of intentions,” was right at the vanguard of that change.

The adversarial persuasion machine

Greg: So how did you end up at Oxford, studying tech ethics? What did you go there to learn about?

James: What led me to go to Oxford to study the ethics of persuasion and attention was that I didn’t see this reorientation of advertising around people’s true goals and intentions ultimately winning out across the industry. In fact, I saw something really concerning happening in the opposite direction. The old attention-grabby forms of advertising were being uncritically reimposed in the new digital environment, only now in a much more sophisticated and unrestrained manner. These attention-grabby goals, which are goals that no user anywhere has ever had for themselves, seemed to be cannibalizing the design goals of the medium itself.

In the past advertising had been described as a kind of “underwriting” of the medium, but now it seemed to be “overwriting” it. Everything was becoming an ad. My whole digital environment seemed to be transmogrifying into some weird new kind of adversarial persuasion machine. But persuasion isn’t even the right word for it. It’s something stronger than that, something more in the direction of coercion or manipulation that I still don’t think we have a good word for. When I looked around and didn’t see anybody talking about the ethics of that stuff, in particular the implications it has for human freedom, I decided to go study it myself.

Greg: How stressful of a time was that for you when you were realizing that you needed to make such a big change or that you might be making such a big change?

James: The big change being shifting to do doctoral work?

Greg: Well that, but really I’m trying to understand what it was like to go from a very high place in the tech world to becoming essentially a philosopher critic of your former work.

James: A lot of people I talked to didn’t understand why I was doing it. Friends, coworkers, I think they didn’t quite understand why it was worthy of such a big step, such a big change in my personal life to try to interrogate this question. There was a bit of, not loneliness, but a certain kind of motivational isolation, I guess. But since then, it’s certainly been heartening to see many of them come to realize why I felt it was so important. Part of that is because these questions are so much more in the foreground of societal awareness now than they were then.

Liberation in the age of attention

Greg: You write about how when you were younger you thought “there were no great political struggles left.” Now you’ve said, “The liberation of human attention may be the defining moral and political struggle of our time.” Tell me about that transition intellectually or emotionally or both. How good did you think it was back then, the world was back then, and how concerned are you now?

James: I think a lot of people in my generation grew up with this feeling that there weren’t really any more existential threats to the liberal project left for us to fight against. It’s the feeling that, you know, the car’s already been built, the dashboard’s been calibrated, and now to move humanity forward you just kind of have to hold the wheel straight and get a good job and keep recycling and try not to crash the car as we cruise off into this ultra-stable sunset at the end of history.

What I’ve realized, though, is that this crisis of attention brought upon by adversarial persuasive design is like a bucket of mud that’s been thrown across the windshield of the car. It’s a first-order problem. Yes, we still have big problems to solve like climate change and extremism and so on. But we can’t solve them unless we can give the right kind of attention to them. In the same way that, if you have a muddy windshield, yeah, you risk veering off the road and hitting a tree or flying into a ravine. But the first thing is that you really need to clean your windshield. We can’t really do anything that matters unless we can pay attention to the stuff that matters. And our media is our windshield, and right now there’s mud all over it.

Greg: One of the terms that you either coin or use for the situation that we find ourselves in now is the age of attention.

James: I use this phrase “Age of Attention” not so much to advance it as a serious candidate for what we should call our time, but more as a rhetorical counterpoint to the phrase “Information Age.” It’s a reference to the famous observation of Herbert Simon, which I discuss in the book, that when information becomes abundant it makes attention the scarce resource.

Much of the ethical work on digital technology so far has addressed questions of information management, but far less has addressed questions of attention management. If attention is now the scarce resource so many technologies are competing for, we need to give more ethical attention to attention.

Greg: Right. I just want to make sure people understand how severe this may be, how severe you think it is. I went into your book already feeling totally distracted and surrounded by totally distracted people. But when I finished the book, and it’s one of the most marked-up books I’ve ever owned by the way, I came away with the sense of acute crisis. What is being done to our attention is affecting us profoundly as human beings. How would you characterize it?

James: Thanks for giving so much attention to the book. Yeah, these ideas have very deep roots. In the Dhammapada the Buddha says, “All that we are is a result of what we have thought.” The book of Proverbs says, “As a man thinketh in his heart, so is he.” Simone Weil wrote that “It is not we who move, but images pass before our eyes and we live them.” It seems to me that attention should really be seen as one of our most precious and fundamental capacities, cultivating it in the right way should be seen as one of the greatest goods, and injuring it should be seen as of the greatest harms.

In the book, I was interested to explore whether the language of attention can be used to talk usefully about the human will. At the end of the day I think that’s a major part of what’s at stake in the design of these persuasive systems, the success of the human will.

“Want what we want?”

Greg: To translate those concerns about “the success of the human will” into simpler terms, I think the big concern here is, what happens to us as human beings if we find ourselves waking up in the morning and going to bed at night wanting things that we really only want because AI and algorithms have helped convince us we want them? For example, we want to be on our phone chiefly because it serves Samsung or Google or Facebook or whomever. Do we lose something of our humanity when we lose the ability to “want what we want?”

James: Absolutely. I mean, philosophers call these second order volitions as opposed to just first order volitions. A first order volition is, “I want to eat the piece of chocolate that’s in front of me.” But the second order volition is, “I don’t want to want to eat that piece of chocolate that’s in front of me.” Creating those second order volitions, being able to define what we want to want, requires that we have a certain capacity for reflection.

What you see a lot in tech design is essentially the equivalent of a circular argument about this, where someone clicks on something and then the designer will say, “Well, see, they must’ve wanted that because they clicked on it.” But that’s basically taking evidence of effective persuasion as evidence of intention, which is very convenient for serving design metrics and business models, but not necessarily a user’s interests.

AI and attention

Greg: Let’s talk about AI and its role in the persuasion that you’ve been describing. You talk, a number of times, about the AI behind the system that beat the world champion at the board game Go. I think that’s a great example and that that AI has been deployed to keep us watching YouTube longer, and that billions of dollars are literally being spent to figure out how to get us to look at one thing over another.