
AI AUGMENTATION: THE REAL FUTURE OF ARTIFICIAL INTELLIGENCE

· By Kurt Cagle ·

While artificial intelligence continues to drive completely autonomous technologies, its real value comes in enhancing the capabilities of the people who use it.

I love Grammarly, the writing correction software from Grammarly, Inc. As a writer, I have found it invaluable time and time again: it pops up quietly to say that I forgot a comma, got a bit too verbose in a sentence, or used too many adverbs. I even sprang for the professional version.


Besides endorsing it, I bring Grammarly up for another reason: it is the face of augmentative AI. It is AI because it uses some very sophisticated (and likely recursive) algorithms to determine when grammar is being used improperly, and even to recommend better ways to phrase things. It is augmentative because, rather than replacing the writer outright, it is intended to nudge the author in a particular direction - to lend them a certain degree of editorial expertise so that they can publish with more confidence or reduce the workload on a copy editor.


This may sound like it eliminates the need for a copy editor, but even that's not really the case. The truth is, many copy editors also use Grammarly, and prefer that their writers do so as well, because they would rather take on the much more subtle task of improving well-wrought prose than the tedious and maddening one of correcting grammatical and spelling errors.


As a journalist, I use Cisco's Webex a great deal. Their most recent products have introduced something I've found invaluable - the ability to transcribe audio in real time. Once again, this natural language processing (NLP) capability, long a holy grail of AI, is simply there. It has turned what was once a tedious, day-long operation into a comparatively short editing session (no NLP is 100% accurate), meaning that I can spend more time gathering the news than transcribing it.



These examples may seem a far cry from the popular vision of AI as a job stealer - from autonomous cars and trucks to systems that will eliminate creatives and decision makers - but they are actually pretty indicative of where artificial intelligence is going. I've written before about Adobe Photoshop's Select Subject feature, which uses a fairly sophisticated AI to select the part of an image that looks like the focus of the shot. This is an operation that can be done by hand, but it is slow, tedious, and error-prone. With Select Subject, Photoshop selects what I would have selected most of the time, and the rest can then be added relatively easily.


What's evident from these examples is that this kind of augmentative AI can take over the parts of a task that carried a high cost for very little added value. Grammarly doesn't change my voice as a writer significantly. Auto-transcription takes a task that would likely take me several hours to do manually and reduces it to seconds, so that I can focus on the content. Photoshop's Select Subject eliminates the need for painstaking manual selection. It can be argued in all three cases that this does eliminate the need for a human being to do these tasks, but let's face it - these are tasks that nobody would prefer to do unless they really had no choice.


These kinds of instances do not flash "artificial intelligence" at first blush. When Microsoft PowerPoint suggests alternative visualizations to the boring old bullet-point slide, the effect is to change behavior by giving a nudge. The program is saying, "This looks like a pyramid, or a timeline, or a set of bucket categorizations. Why don't you use this kind of presentation?"


Over time, you'll notice that certain presentations float to the top more often than others, because you tend to choose them more often - though occasionally the AI mixes things up, "realizing," by analyzing your history with the app, that you may be going overboard with a particular layout and should try others for variety. Grammarly (and related services such as Textio) follow grammatical rules, but use these products for a while and you'll find that they begin making larger and more complex recommendations that match your own writing style.



You see this behavior increasingly in social media platforms, especially in longer-form business messaging such as LinkedIn, where the recommendation engine will often suggest completion content that can be a sentence long or more. Yes, you are saving time, but the AI is also training you even as you train it, putting forth recommendations that sound more professional and that, by extension, teach you to prefer that form of rhetoric - to be more aware of certain grammatical constructions without necessarily knowing exactly what those constructions are.


It is this subtle interplay between human and machine agency that makes AI augmentation so noteworthy. Until comparatively recently, this capability didn't exist in the same way. When people developed applications, they created capabilities - modules - that added functionality, but that functionality was generally bounded. Auto-saving a word processing document, for instance, is not AI; it is a simple algorithm that detects when changes were made, then issues a save call after activity (such as typing) stops for a set period of time.
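To make the contrast concrete, here is a minimal Python sketch of such a bounded, non-AI auto-save routine; the five-second idle threshold and the save callback are purely illustrative assumptions, not any particular editor's implementation.

import time

class AutoSaver:
    """A bounded, non-AI capability: save after typing pauses for a fixed interval."""

    def __init__(self, save_fn, idle_seconds=5.0):
        self.save_fn = save_fn          # callback that actually persists the document
        self.idle_seconds = idle_seconds
        self.last_edit = None
        self.dirty = False

    def on_edit(self):
        # The editor calls this whenever the document changes.
        self.last_edit = time.monotonic()
        self.dirty = True

    def poll(self):
        # Called periodically; saves once activity has stopped long enough.
        if self.dirty and time.monotonic() - self.last_edit >= self.idle_seconds:
            self.save_fn()
            self.dirty = False

No matter how long you use it, the rule stays exactly the same - which is precisely what separates it from the adaptive behavior described next.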


However, work with an intelligent word processor long enough and several things will begin to reconfigure themselves to better accommodate your writing style. Word and grammatical recommendations will begin to reflect your specific usage. Soft grammatical "rules" will be suppressed if you continue to ignore them, the application making the reasonable assumption that you are ignoring them deliberately.
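One plausible (and deliberately simplified) way that suppression could work is a per-rule dismissal counter: once a soft rule has been waved off a few times, it stops being flagged. This is a sketch under my own assumptions about the threshold, not a description of how any particular product does it.

from collections import defaultdict

class SoftRuleEngine:
    """Toy model of 'soft' grammar rules that fade away when repeatedly ignored."""

    DISMISS_LIMIT = 3  # assumed: stop flagging a rule after three dismissals

    def __init__(self):
        self.dismissals = defaultdict(int)

    def dismiss(self, rule_id):
        # The user saw the suggestion and chose to ignore it.
        self.dismissals[rule_id] += 1

    def should_flag(self, rule_id):
        # Hard rules (say, subject-verb agreement) would bypass this check entirely.
        return self.dismissals[rule_id] < self.DISMISS_LIMIT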


Ironically, this also means that if someone else uses your particular "trained" word processor, they will likely get frustrated, because the recommendations being made do not fit their writing style - not because the software is programmed to follow a given standard, but because it has been trained to facilitate your style instead.



Training is the process of feeding input data into a machine learning system in order to establish the parameters used for subsequent categorization.

In effect, the use of augmented AI personalizes that AI - the AI becomes a friend and confidant, not just a tool. This isn't some magical, mystical computer science property. Human beings are social creatures, and when we are lonely we tend to anthropomorphize even the inanimate objects around us so that we have someone to talk to. Tom Hanks, in one of his best roles to date (Cast Away), made this obvious in his humanizing of a volleyball as Wilson, an example of what TV Tropes calls a "Companion Cube," named for a similar anthropomorphized object from the Portal game franchise.


Augmented AIs are examples of such companion cubes, ones that are increasingly capable of conversation and remembered history ("Hey, Siri, do you remember that beach ball in that movie we watched about a castaway who talked to it?" "I think the ball's name was Wilson. Why do you ask?").

Remembered history is actually a pretty good description of how most augmented AIs work. Typically, an AI is trained to pick up anomalous behavior relative to a specific model, assessing both the kind and the weight of that anomaly and adjusting the model accordingly. In lexical analysis, this includes the presence of new words or phrases and the absence of previously existing ones (which are in turn kept in some form of index). A factory-reset AI will likely change fairly significantly as a user interacts with it, but over time the model will come to more closely represent the optimal state for that user.
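In lexical terms, that remembered history can be as simple as a running frequency index over the user's vocabulary; words that are new relative to the index are the anomalies that trigger an adjustment. A rough sketch, with a deliberately naive tokenizer and invented example sentences:

from collections import Counter

class LexicalProfile:
    """Toy 'remembered history': a frequency index of everything the user has written."""

    def __init__(self):
        self.index = Counter()

    def update(self, text):
        tokens = text.lower().split()
        new_words = [t for t in tokens if t not in self.index]  # anomalies: unseen vocabulary
        self.index.update(tokens)
        return new_words

profile = LexicalProfile()
profile.update("augmented ai nudges the writer")
print(profile.update("the writer nudges the model"))  # ['model'] - the only new word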


In some cases, the model itself is also somewhat self-aware, and will deliberately "mutate" its weightings based upon certain parameters to mix things up a bit. News filters, for instance, will normally gravitate toward a state where certain topics predominate (news about "artificial intelligence" or "sports balls," for instance, based upon a user's selections), but every so often a filter will pick up something that is three or four hops away along a topic selection graph, in order to keep the filter from becoming too narrow.
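A toy version of that "mix things up" behavior might look like the following, where the topic graph, the hop count, and the ten percent exploration rate are all invented for illustration:

import random

# Hypothetical topic graph: edges connect related topics.
TOPIC_GRAPH = {
    "artificial intelligence": ["machine learning", "robotics"],
    "machine learning": ["artificial intelligence", "statistics"],
    "statistics": ["machine learning", "economics"],
    "economics": ["statistics", "politics"],
    "robotics": ["artificial intelligence", "manufacturing"],
}

def pick_topic(favorite, explore_prob=0.1, hops=3):
    """Usually serve the user's favorite topic; occasionally wander a few hops away."""
    if random.random() >= explore_prob:
        return favorite
    topic = favorite
    for _ in range(hops):
        # Random walk out along the graph to keep the filter from narrowing too far.
        topic = random.choice(TOPIC_GRAPH.get(topic, [favorite]))
    return topic

print(pick_topic("artificial intelligence"))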


This, of course, also highlights one of the biggest dangers of augmenting AIs. Such filters create an intrinsic, self-selected bias in the information that gets through. If your personal bias tends to favor a certain political ideology, you get more stories (or recommendations) that favor that bias and fewer that counter it. This can create a bubble in which what you see reinforces what you believe, while counterexamples never get through the filters. Because this effect is invisible, it may not even be obvious that it is happening, but it is one reason why any AI should periodically nudge itself out of its calculated presets.



Just as a sound mixer can be used to adjust the input weights of various audio signals, so too does machine learning set the weights of various model parameters.

The other issue that besets augmented AIs lies in the initial design of the model. One of the best analogs for the way machine learning in particular works is to imagine a sound mixer with several dozen (or several thousand) dials that automatically adjust themselves to determine the weights of various inputs. In an ideal world, each dial is hooked up to a variable that is independent of the others (changing one variable doesn't affect any other variable). In reality, it is not unusual for some variables to be somewhat (or even heavily) correlated, which means that if one variable changes, it causes other variables to change automatically, though not necessarily in completely known ways.
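The mixer analogy translates fairly directly into code: each input gets a dial (a weight), and correlated inputs mean two dials can no longer be turned independently. A small NumPy sketch, with every number made up for illustration:

import numpy as np

rng = np.random.default_rng(0)

# Two inputs that are supposed to be independent dials...
age = rng.normal(45, 12, size=1000)
# ...but the second is partly driven by the first (a built-in correlation).
engagement = 0.6 * age + rng.normal(0, 10, size=1000)

weights = np.array([0.3, 0.7])                   # the "dial" settings
signal = weights @ np.vstack([age, engagement])  # the mixed output

print(np.corrcoef(age, engagement)[0, 1])  # roughly 0.6: turning one dial drags the other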


For instance, age and political affiliation might not, at first glance, appear correlated, but as it turns out there are subtle (and not completely linear) correlations that do tend to show up when a large enough sample of the population is taken. In a purely linear model (the domain primarily of high school linear algebra) the variables usually are completely independent, but in real life the coupling between variables can become chaotic and nonlinear in unpredictable ways, and one of the big challenges data scientists face is determining whether the model in question is linear within the domain being considered.
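One crude way to probe that question is to fit a straight line over the domain of interest and look at the spread of the residuals; structure left over suggests the linear assumption is breaking down. The sketch below uses a synthetic, mildly nonlinear relationship, so every number in it is an assumption:

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(18, 90, 300)                          # e.g. age
y = 10 * np.log(x) + rng.normal(0, 0.5, size=x.size)  # a mildly nonlinear relationship

def residual_spread(x, y):
    """Standard deviation of the residuals from a straight-line fit."""
    slope, intercept = np.polyfit(x, y, 1)
    return np.std(y - (slope * x + intercept))

print(residual_spread(x[x < 40], y[x < 40]))  # narrow domain: close to linear
print(residual_spread(x, y))                  # full domain: the curvature shows up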


Every AI has some kind of model that determines the variables (columns) that are adjusted as learning takes place. If there are too few variables, the model may not fit the data well. If there are too many, the curves being delineated may become too restrictive, and if specific variables are correlated in some manner, then small variations in input can explode and create noise in the signal. This means that few models are perfect (and the ones that are perfect are too simple to be useful), and sometimes the best you can do is keep false positives and false negatives below a certain threshold.
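In practice, that means a model is judged on its error rates rather than on perfection. A minimal sketch of computing false positive and false negative rates; the toy predictions, labels, and the 10% threshold in the comment are all invented:

def error_rates(predictions, labels):
    """False positive rate and false negative rate for binary predictions."""
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(l and not p for p, l in zip(predictions, labels))
    negatives = sum(not l for l in labels)
    positives = sum(labels)
    return fp / negatives, fn / positives

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
labels = [1, 0, 0, 1, 1, 0, 1, 0]

fpr, fnr = error_rates(preds, labels)
print(fpr, fnr)  # acceptable only if both stay below some agreed threshold, e.g. 0.1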


Deep learning AIs are similar, but they essentially have the ability to determine the variables (or axes) that are most orthogonal to one another. However, this comes at a significant cost - it may be far from obvious how to interpret those variables. This explainability problem is one of the most vexing problems facing the field of AI, because if you don't know what a variable actually means, you can't conclusively prove that the model actually works.
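The classical illustration of this trade-off is principal component analysis: the derived axes are orthogonal by construction, but each one is a blend of the original, human-readable columns. A small NumPy sketch with fabricated columns:

import numpy as np

rng = np.random.default_rng(2)

# Three correlated, human-readable columns (all values fabricated).
age = rng.normal(45, 12, size=500)
income = 800 * age + rng.normal(0, 5000, size=500)
screen_time = 6 - 0.05 * age + rng.normal(0, 1, size=500)
X = np.column_stack([age, income, screen_time])

# Derive orthogonal axes (principal components) from the correlated columns.
Xc = X - X.mean(axis=0)
_, _, components = np.linalg.svd(Xc, full_matrices=False)

# Each row of 'components' is an orthogonal axis, but it is a blend of age, income,
# and screen time - easy to compute, much harder to explain.
print(components[0])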



Sometimes the patterns that emerge in augmented AI are not the ones we think they are.

A conversation at an artificial intelligence meetup in Seattle illustrated this problem graphically. In one deep analysis of patients at a given hospital, a deep learning model emerged that seemed to perfectly predict, from a person's medical record, whether that patient had cancer. The analysts examining the (OCR-scanned) data were ecstatic, thinking they had found a foolproof model for cancer detection, when one of the nurses working on the study pointed out that every cancer patient's paper record had a "C" written in one corner of the form to let the nurses see quickly who had cancer and who didn't. The AI had picked this up in the analysis, and not surprisingly it accurately predicted that if the "C" was in that corner, the patient was sure to have cancer. Once this factor was eliminated, the accuracy of the model dropped considerably. (Thanks to Reza Rassool, CTO of RealNetworks, for this particular story.)
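This kind of failure is what data scientists usually call target leakage: a "feature" that is really just the label written down a second time. A toy illustration with made-up records shows why the model looked perfect:

# Toy illustration of label leakage: 'marker_c' is just the diagnosis in disguise.
records = [
    {"age": 61, "marker_c": 1, "has_cancer": 1},
    {"age": 45, "marker_c": 0, "has_cancer": 0},
    {"age": 58, "marker_c": 1, "has_cancer": 1},
    {"age": 39, "marker_c": 0, "has_cancer": 0},
]

# Any learner will discover that marker_c alone predicts the label perfectly...
accuracy = sum(r["marker_c"] == r["has_cancer"] for r in records) / len(records)
print(accuracy)  # 1.0 - impressive, and meaningless, because the feature *is* the answer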


Augmentation is likely to be, for some time to come, the way most people directly interact with artificial intelligence systems. The effects will be subtle - steadily improving the quality of the digital products people produce, reducing the number of errors that show up, and reducing the overall time it takes to create intellectual works - art, writing, code, and so forth. At the same time, augmentation raises intriguing ethical questions, such as: if an AI is used to create new content, to what extent is that augmenting technology actually responsible for what is created?


It also raises serious questions about simulacra in the digital world. Daz Studio, a freemium 3D rendering and rigging product, recently added an upgrade that uses facial recognition to analyze a portrait and generate a 3D model and materials from it. While the results are still (mostly) in uncanny valley territory, such a tool makes it possible to create images and animations that look surprisingly realistic - in many cases close enough to a real person to be indistinguishable. If you think about actors, models, business people, political figures, and others, you can see where these kinds of technologies could be used for political mischief.


This means that augmentation AI is also likely to be the next front in an ethical battleground, as laws, social conventions, and ethics begin to catch up with the technology.


There is no question that artificial intelligence is rewriting the rules, for good and for ill, and augmentation - the kind of AI that is here today and is becoming increasingly difficult to distinguish from human-directed software - is a proving ground for how the human/computer divide will assert itself. Pay attention to this space.


 

Kurt Cagle

Kurt Cagle is a writer, data scientist, and futurist focused on the intersection of computer technologies and society. He is the founder of Semantical, LLC, a smart data company, and is currently developing a cloud-based knowledge base, to be publicly released in early 2020. He is seeking early investors and beta testers; contact kurt.cagle@gmail.com for more information.

 
