I wrote these AI Principles in 2014. Did they stand the test of time? 👀

Laura Rodriguez · Published in Prototypr · 14 min read · Jun 23, 2020

It was 2014. I was a User Experience Designer at IBM, and we were coming out of an AI winter. Natural language processing was becoming ubiquitous, showing up in phones and multiple consumer-facing experiences.

At the time, my Design Director asked me to proactively investigate the potential of AI within our collaboration suite of products (think e-meetings, chat, email, file sharing, profiles, etc.). Those ended up being some of the most formative years of my life. I started to identify as someone who wanted to be a thought leader in this space, intrinsically motivated to create frameworks around something we were still actively working to understand.

“Without a framework it’s all random behavior.” — Phil Gilbert, General Manager, IBM Design

Fast forward to today: a former colleague asked if I still had that set of Design Principles for someone to reference in their project. Curious, I dug around my files and found an .HTML doc that loaded a skeleton version of the website I had used to socialize them. When it loaded, I realized I was looking at a time capsule from six years ago, which is eons in the tech world.

So, just for fun, I am going to post them here while posing the question: Do you think these are still relevant? I’ll withhold my judgment until the very end. Note: At the time, IBM referred to artificial intelligence as cognitive computing.

Here we go! 😬🤞

01 /

The cognitive system should feel human-like, aligning with my natural communication patterns.

Not present itself as an actual human or character.

Why?

When it comes to cognitive systems, we want to combine the best parts of being human (by privileging natural language) with the best parts of being a computer (a methodical and tireless digital assistant). We do not define success by our ability to trick people into thinking they’re interacting with an actual human. That comes with a set of expectations counter to our trust-building goals. Instead, being upfront about the system’s role during any assistance lets us start crafting the relationship between users and their work tools. We need to purposefully design for a new kind of empathy with cognitive systems, generating expectations that build trust and long-term personal adoption.

Do:

  • Look at every interaction as an opportunity to set the right expectations, especially with the very first use.
  • Always attribute any assistance to the software, like “Verse would like to suggest…”
  • Strip away any human identifiers in relation to the cognitive capabilities, like misleading pictures or names.
  • Refer to the system using gender-neutral, collaborative language, like it, we, and us.
  • Push to redefine what it means to be an assistive system in existing channels of communication commonly associated with real humans, like chat, email, or status updates.

Do Not:

  • Refer to the system using gender-specific pronouns, like he or she.
  • Create a human-like persona that would lead someone to believe the assistance is separate from the software.

02 /

The cognitive system should present itself as dynamic and teachable with the ability to make human-like errors.

Not present itself as a system that offers 100% reliability.

Why?

Interacting with cognitive systems means the system is learning by experience and improving over time. This could be met with some resistance, since a majority of users are quickly dissatisfied with anything under 90% accuracy (the error rate for our competitors’ artificial intelligence, like Siri and Cortana, is around 5%, down from 25% just a few years ago). This room for error diverges from the predictability we’ve come to expect from our work tools. Since the software’s predictability is ultimately about “behaving as expected,” it’s important for us to accurately inform our users’ expectations.

Do:

  • Set the expectation, from the very first use to every interaction after, that cognitive systems get better with usage and time.
  • Build in inviting prompts where the user can give the system explicit feedback, where applicable. (See number 11 for more details.)
  • Design micro-interactions that communicate the computer’s learning, like a purposeful delay with a progress indicator after the user has actively told the system a preference, signaling that the system is taking the information in and thinking. (See the sketch after this list.)
  • Position the system’s assistance as a suggestion that is easily accepted or ignored.
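
To make that “purposeful delay” concrete, here is a minimal sketch in TypeScript. The showProgress and savePreference callbacks and the 900 ms floor are illustrative assumptions on my part, not anything from the original principles:

```typescript
// A minimal sketch of a purposeful delay after explicit feedback.
// showProgress, savePreference, and the 900 ms floor are assumptions.
async function acknowledgePreference(
  showProgress: (visible: boolean) => void,
  savePreference: () => Promise<void>
): Promise<void> {
  showProgress(true); // progress indicator: the system is "taking it in"
  const started = Date.now();
  await savePreference(); // persist the user's explicit feedback
  const elapsed = Date.now() - started;
  const minDelayMs = 900; // assumed floor so the learning feels deliberate
  if (elapsed < minDelayMs) {
    await new Promise((resolve) => setTimeout(resolve, minDelayMs - elapsed));
  }
  showProgress(false);
}
```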

Do Not:

  • Feel obligated to always show the system’s confidence percentages to communicate room for error; be very thoughtful about the actual value that adds.

03 /

The cognitive system should improve over time, accurately reflecting my values through active and passive learning.

Not act unpredictably or inconsistently with my behavior.

Why?

While the behavior of the cognitive system is largely dependent on what happens in the development phase, designers need to look for opportunities to communicate a clear cause and effect on the front-end. We want a system that feels consistent with the user’s behavior without a lot of overhead, so the challenge is not only finding the best moments for active and passive learning but also conveying how the system is adapting as a result.

Do:

  • Explore opportunities where the user can “calibrate” the cognitive settings early on, like a lightweight and conversational onboarding wizard.
  • When the user is opting into evidence for the system’s decision making, annotate which pieces of the algorithm are system-generated vs. manually modified to help manage expectations. (See the sketch after this list.)
  • Give the user a clear cause and effect after actively giving the system feedback, with the ability to undo.
  • Map everything the system is informed by back to a section of the user settings, where the user can opt into more of a “peek behind the curtain.”
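
Here is a minimal sketch, in TypeScript, of how settings could carry a provenance tag to drive that system-generated vs. manually-modified annotation. The type shape, field names, and labels are all assumptions for illustration:

```typescript
// A minimal sketch of provenance tagging for cognitive settings.
// The shape and labels are illustrative assumptions, not a product schema.
type Provenance = "system-learned" | "user-set";

interface CognitiveSetting {
  key: string;            // e.g. "meeting-reminder-lead-time" (hypothetical)
  value: string | number;
  provenance: Provenance; // drives the "system vs. you" annotation in the UI
  updatedAt: Date;
}

// Render-ready label so the settings page can explain where a value came from.
function provenanceLabel(setting: CognitiveSetting): string {
  return setting.provenance === "user-set"
    ? "Set by you"
    : "Learned from your activity";
}
```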

Do Not:

  • Front-load a user’s first experience with heavy-handed calibration that feels tedious and technical.
  • Encourage users to visit and tinker with the user settings; in the best-case scenario, the system will adapt over time without a lot of manual input from the user.

04 /

The cognitive system should support me in an unobtrusive way.

Not forcefully assert itself.

Why?

Always keep in mind what users are trying to accomplish, with a heightened sensitivity to current workflows and adopted behaviors. A cognitive design intervention should never go against the grain or conflict with the current way tasks are being completed. Instead, we want to take the goals we know our users have and the way they’re completing them today, and seamlessly integrate enhancements into the workflow as they know it. This is by no means a trivial task, but it is the key to discoverability and long-term use.

Do:

  • Location, location, location! Your design intervention should either be seamlessly integrated or secondary to the main task at hand.
  • Study the user’s natural behavioral patterns, so you can anticipate their time of need.
  • If you are not seamlessly integrating, look for vertical real estate below the main task on the page where pixels are less precious but still viewable in context.
  • Position the cognitive design intervention in a consistent location, but still easy to ignore.
  • Best case scenario, but much trickier: think about how you can add more meaning to the UI elements people are currently using, instead of introducing new elements that come with a learning curve.

Do Not:

  • Push down the main content of the page by privileging new cognitive elements; vertical real estate is precious, and the assistance might end up being viewed out of context.
  • Interrupt the user’s main task with a design intervention that reminds the user of spam or bad advertising — like abusing the use of flashy animations, sound, color, or exaggerated scale.

05 /

The cognitive system should make me feel in control at all times.

Not take action on my behalf without my permission.

Why?

Since we’re introducing cognitive systems that can streamline your workflow, being sensitive to how much control the user feels is critical to personal adoption. Every element of the design needs to be considered from the user’s point of view: primarily language and built-in reassuring feedback loops. If, for example, a user isn’t sure what’s going to happen after they click a poorly worded button, they’re going to be less likely to click it — keeping the benefits of our time-saving, differentiating technology hidden out of sight and unused! Transparency and clarity are critical to designing successful cognitive systems with barriers to entry so low that nobody will ever want the “old way” of doing things.

Do:

  • Pay careful attention to the language you use, making the system’s intentions unmistakably clear to the user.
  • Build in reassuring feedback loops after the user has taken an action, summarizing what happened.
  • When applicable, surface the ability to undo.
  • When applicable, leave a paper trail, of sorts, of the action the user took in context, so it can be easily referenced or undone later. (See the sketch after this list.)
  • When applicable, give evidence or allow the user to opt into seeing evidence for the system’s decision making. (See number 10 for details.)
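
One way to picture that paper trail is a simple log of assisted actions, each carrying its own inverse so “undo” stays one click away. This TypeScript sketch uses assumed names and structure, not a product API:

```typescript
// A minimal sketch of a paper trail of assisted actions with undo.
// Names and structure are illustrative assumptions.
interface AssistedAction {
  id: string;
  summary: string;  // shown in the feedback loop, e.g. "Added 3 invitees"
  undo: () => void; // inverse operation, captured when the action ran
  performedAt: Date;
}

const trail: AssistedAction[] = [];

// Record each system-assisted action so it stays referenceable in context.
function record(action: AssistedAction): void {
  trail.push(action);
}

// Surface "Undo" against any entry in the trail.
function undoAction(id: string): void {
  const index = trail.findIndex((action) => action.id === id);
  if (index === -1) return;
  trail[index].undo();    // run the stored inverse
  trail.splice(index, 1); // drop it from the trail once reverted
}
```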

Do Not:

  • Automate any tasks without the final okay from the user.

06 /

The cognitive system should have a friendly personified tone*, making me feel calm and proactive.

Not be a cold, succinct computer that induces stress and anxiety.

* ok, maybe not always… I’ll explain below.

Why?

We want the cognitive system to have a personality, of sorts, that’s positive, affirming, and welcoming. Whenever you’re considering what a prompt from the system should say, ask yourself, “How would a human personal assistant talk in real life?” I would expect a human assistant to always make me feel like I’m on top of my day, clearly outlining my choices whenever a decision needs to be made or a conflict occurs. Our goal is to have users feeling happier and more efficient as a result of interacting with our cognitive system, so we want to omit language and symbols that could have the opposite effect (nobody is going to have their best workday with stress hormones unnecessarily coursing through their bloodstream). While our first choice is to make the system more human-like and conversational, always use your best judgment for when that may not be appropriate.

Do:

  • As a starting point, consider if a conversational tone is appropriate before defaulting to a more succinct and technical tone.
  • Be aware that there are moments where being efficient and succinct is still the better experience, but (again) don’t just assume it’s always better.
  • Use language that a human assistant would use in real life. While this results in longer, less efficient text strings, the key part of the message should be emphasized for easy scanning.
  • Think about what your word choices connote and opt for words that make a user feel positive, proactive, and empowered.
  • At a scan, before the user even starts reading, your prompts should look friendly and inviting.

Do Not:

  • Introduce imagery or language that may cause alarm, like unnecessary exclamation points or warning symbols.
  • Choose words that make a user feel negative or behind on their work goals, like “URGENT!”

07 /

The cognitive system should help me take my logical next step, considering my context and history.

Not constantly bug me with unrelated, unhelpful suggestions.

Why?

Getting work done on a computer is actually an unnatural way of completing tasks, reliant on a user’s understanding of which buttons to click and in what order. By prioritizing natural language with our cognitive systems, we’re helping users complete tasks in a way that is much more natural to them, making the experience conversational with the sequence of button clicks left to the system. The trick is knowing what kinds of tasks are optimal for streamlining and presenting the prompt in the right place, at the right time. Keep in mind: we’re not trying to change our users’ goals, we’re just trying to improve the impact and speed of completing them.

Do:

  • Be informed by what users are already doing in certain areas of the product and figure out how the system can do the heavy lifting for them.
  • Take what was initially a time-consuming series of button clicks and streamline that task behind a proactive, natural language prompt.
  • Always consider how system-generated suggestions map to the back-end. If it’s doable today through clicks of the mouse then it’s a great candidate for a virtual agent to expedite.
  • Make users the “human boss,” overseeing and approving the system’s automation.
  • Think about light-weight feedback mechanisms, so the user can actively let the system know their preferences (similar to Pandora’s thumb up, thumb down model).
  • Think about opportunities for the system to get smarter without active feedback (for example: a suggestion that gets ignored x times gets a decrease in confidence and shows up less; see the sketch below).
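
As a sketch of that passive-learning idea: every ignored suggestion could decay its confidence score until it falls below a visibility threshold and quietly disappears. The decay factor and threshold here are assumed values for illustration, not tuned numbers:

```typescript
// A minimal sketch of passive learning via confidence decay.
// IGNORE_DECAY and VISIBILITY_THRESHOLD are assumed, untuned values.
interface Suggestion {
  id: string;
  text: string;
  confidence: number; // 0..1, produced by the back-end model
  timesIgnored: number;
}

const IGNORE_DECAY = 0.85;        // each ignore trims confidence by 15%
const VISIBILITY_THRESHOLD = 0.4; // below this, stop surfacing the suggestion

// Called when a suggestion is dismissed or passed over without action.
function recordIgnore(suggestion: Suggestion): Suggestion {
  return {
    ...suggestion,
    timesIgnored: suggestion.timesIgnored + 1,
    confidence: suggestion.confidence * IGNORE_DECAY,
  };
}

// Only suggestions still above the threshold get shown.
function visibleSuggestions(all: Suggestion[]): Suggestion[] {
  return all.filter((s) => s.confidence >= VISIBILITY_THRESHOLD);
}
```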

Do Not:

  • Start by fabricating new workflows or scenarios for the system to carry out, which do not currently map to the back-end.
  • Try to coerce users into tasks that do not reinforce or relate to the task at hand; we don’t want to distract people from accomplishing their goals with our cognitive integrations.

08 /

The cognitive system should elevate the most important, high-confidence suggestion.

Not overwhelm me with every choice at one level.

Why?

The biggest shift with cognitive computing is the fact that computer systems are now thinking for you — they use deep learning and neural networks instead of the very complex and static conventional programs we’re used to. As a result, users should be able to spend less mental energy thinking about the things the system is assisting them with, and focus their energy on a new kind of workflow. This new workflow involves our users being the human boss, overseeing everything their tireless digital assistant is doing. Our goal is to make those moments, when users are reviewing the system’s suggestions, as productive as possible. How can we make sure our users look forward to input from the system? How can we make sure to introduce cognitive systems without exposing our users to a new realm of cognitive overload?

Do:

  • Regardless of how many suggestions the system has to offer, give it a clear voice in terms of what it thinks the user should do first, reducing the amount of decision making for the user. (See the sketch after this list.)
  • Make it easy to access the other, lower-confidence, suggestions, like swiping a carousel on mobile and a carousel plus click to ‘see all’ on web.
  • Allow the user, at a scan, to see how many suggested actions they’re getting with a sense of progress as they go through them.
  • Ultimately, try to give the user as few suggestions as possible so they can develop positive associations with the system (like quick, meaningful, and assistive).
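
A minimal sketch of that elevation pattern: sort by confidence, surface one primary suggestion, and tuck the rest behind a carousel or “see all” affordance. The field names are illustrative assumptions:

```typescript
// A minimal sketch of elevating the single highest-confidence suggestion.
// Field names are illustrative assumptions.
interface RankedSuggestion {
  text: string;
  confidence: number; // 0..1
}

// Split into one primary call-to-action plus a collapsed "see all" group.
function elevate(suggestions: RankedSuggestion[]): {
  primary: RankedSuggestion | null;
  secondary: RankedSuggestion[];
} {
  const sorted = [...suggestions].sort((a, b) => b.confidence - a.confidence);
  return {
    primary: sorted[0] ?? null, // the system's one clear voice, shown first
    secondary: sorted.slice(1), // behind a carousel or "see all"
  };
}
```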

Do Not:

  • Display system-generated suggestions without a clear sense of hierarchy, forcing the user to scan everything before taking an action.
  • Overwhelm the user with tons of suggestions at a single touchpoint. Even if the interaction patterns above are appropriately applied, the system will feel less smart and assistive (like it’s throwing all of its ideas at you hoping something will stick).

09 /

The cognitive system should require equal effort to confirm, edit, or cancel.

Not feel like extra work to interact with.

Why?

Streamlining our users’ workflow is a huge benefit we want to bring with cognitive computing, but it’s a lot more than just reducing the number of clicks it takes to do something. Familiarity of the user interface also plays a huge role in reducing the amount of energy users have to spend figuring something out.

Do:

  • Make the amount of effort required to use a suggested action a quantifiable improvement over the “normal” way of doing it.
  • If the user opts-into a suggestion, make it just as easy to opt back out.
  • Make any decisions automated by the system obvious and easily editable.
  • Try to reuse existing interaction patterns for editing the system’s defaults, giving the user a familiar experience during the task.
  • Example: If the system fills out invitees for a new event form, you can easily remove or add additional invitees the same way you would if you had gone out to your calendar and started a new event yourself.

Do Not:

  • Introduce a comparable or downgraded experience, requiring equal or more effort than the “normal” way of doing it.
  • Make up new interaction patterns for editing the system’s defaults if the way people are currently doing it (in other areas of the software) can be re-used.

10 /

The cognitive system should give the appropriate amount of supporting evidence.

Not overwhelm me with logic or keep me in the dark.

Why?

With the introduction of cognitive systems comes a new requirement: two-way communication. Never before have we had to ask the system why it was doing something or expect it to explain itself. Until now, the system has largely been a presentation of all your button-click options; it wasn’t actually doing any of the clicking. Cognitive computing assumes the system is starting to think for you, streamlining actions on your behalf. As a result, being able to quickly peek at its reasoning is pivotal to building trust and educating the user, informing accurate mental models so the software works as expected.

Do:

  • When appropriate, allow users to opt into seeing the evidence instead of having it be a part of the default UI — evidence is usually looked for only when something isn’t working as expected, so otherwise it doesn’t need to be visible.
  • Position the evidence in the context of what it’s informing.
  • Show the user just enough evidence to understand the system’s decision making; be very critical here.
  • Make the evidence clear, succinct, and scannable. (See #6 for details.)
  • When applicable, invite users to “tune” the system’s algorithms directly against pieces of evidence they don’t agree with.
  • Be selective about this: if suggestions from the system are directly based on the user’s behavior, look for thoughtful opportunities to surface that evidence by default. This might be more appropriate as a light-weight natural language prompt that reinforces the system’s ability to learn and adapt.

Do Not:

  • Present evidence in a way that feels unapproachable and time-consuming at a scan.
  • Use jargon or technical language when describing the system’s algorithm.

11 /

The cognitive system should allow low-risk changes to the defaults, steering them toward my values.

Not make me feel like I can “mess something up.”

Why?

It is critical we push ourselves beyond the dry mechanisms for getting user input today, rethinking the experience users have teaching the system. How can it leave our users with a smile on their face and a sense of accomplishment? How can it be something they look forward to? With cognitive systems that adapt over time, making the avenue for user feedback as pleasurable as possible is critical to building something users can live with and periodically mentor.

Do:

  • When appropriate, give the user an opportunity to correct the system’s defaults with a friendly, non-obtrusive prompt.
  • Facilitate low-risk tinkering by making it easy for the user to give input, see the impact, and change their mind if they want.
  • Make any teachable moments with the system feel light, playful, and modern.
  • Take advantage of the affordances that come with the platform you’re designing for to find compelling feedback opportunities. For example, instead of selecting between 5 dots on a form to rate something, give the user a full-screen slider on their mobile device with real-time indications when they’ve reached a new threshold — making the experience of rating something feel more immersive and modern.

Do Not:

  • Make teachable moments with the system feel labor-intensive and technical.
  • Use design elements commonly associated with boring forms, spreadsheets, or surveys. We can do better. Our success depends on it.

2016 • Example applying these concepts years later, in collaboration with Stephanie Celedonia.

✔️ Verdict: I think these have stood the test of time.

While I anticipated being a little embarrassed by some obvious misjudgments, I’m not! And I have a theory…

The common denominator driving this work was always the human experience—answering the question: “How can a system meet us on our terms and not the other way around?” While technology has been rapidly evolving and changing, humans have not. When success is measured in trust, our expectations are not fundamentally different.

Congratulations past-Laura, you’ve made me proud.

Thoughts?
