February 20, 2021  

What is Artificial Intelligence?

Where is it Going and What Impact Will AI Have on Future Marketing Models?

Author Observations on Defining AI.

As I began to ponder the world of AI, I soon felt the subject was too vast to just whip up a blog post without organizing a deeper dive into the world’s brightest minds. So many questions, myths, and facts exist today about what AI is and is not; I felt compelled to write on this subject but needed a better approach. Although I possess a cursory knowledge of the topic after years of observation, grad studies, funding software companies, and technology writing, it just did not seem like enough. I felt compelled to put down the pen (OK, keyboard) and just stop.

As a self-recovering Type-A personality who is used to researching complex topics, I wanted to offer more than a summary post proclaiming that AI will change your life. Most people assume evolving AI models will soon tell you what to do and how to do it better, making a company or person look smarter and choose better. After all, why would that even be news? Isn’t that what AI is supposed to do?

Back to the research lab on this one. I soon checked out every book I could find on AI at the local libraries, started a deeper web-search campaign, took copious notes, and talked to more experts in the multiple technology fields that revolve around AI. I also read everything “new” I could find on the subject. A caution here: in the world of AI, six months is outdated. I now have enough stored data and notes to write a 4th book on why and how AI is in fact here to stay and what this science will and will not do over the next few years. In full disclosure, I mean writing my 4th book, not my 4th book on AI.

AI is Already Creating Big Wins

Make no mistake about the power of today’s artificial intelligence. For example, JP Morgan Chase introduced a system for reviewing commercial loan contracts, a task that used to take loan officers 360,000 hours; it is now done in mere seconds. Other “supervised” versions of AI can now diagnose skin cancer at accuracy rates at or exceeding human diagnostic capabilities (Harvard Business Review Press, AI, 2019).

These “cognitive” forms of AI operating under supervised environments are proving to be more accurate and can save thousands of human hours (not to mention lives), producing vastly improved results when operating across large-scale data environments crunching similar or recognized data samples. The news here is that this type of AI can now penetrate much larger amounts of data, beyond 40M sample sizes, to arrive at both quantitative and qualitative output decisions. What’s more, machines do not require 401(k)s, health plans, or other human benefits. They may, however, learn to demand faster processing and cooler operating environments.

The former version of AI (supervised machine learning, or ML) is somewhat limited by the rules of engagement it is given, whereas the latter (unsupervised ML) is geared toward teaching a machine to “learn how to learn”. A key difference is that the first method generally follows the rules it is given, whereas an unsupervised AI platform is free to roam the cloud in search of building blocks or critical data paths that improve its solution sets, thereby actually learning “how to think”. No question, as the world unleashes the second wave of AI and prepares for the third, there are plenty of challenges to overcome before a machine can better mimic the human brain. Alas, however, as John McCarthy, a noted mathematics professor at Dartmouth College, predicted in 1955, AI will be game changing as advancements in machine learning create building blocks for higher learning, further accelerating machine knowledge.
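
To make the distinction concrete, here is a minimal sketch using the open-source scikit-learn library (my choice of library for illustration, not a platform referenced by any firm above): a supervised model learns from labeled examples it is given, while an unsupervised model must discover the groupings on its own.

```python
# Minimal sketch of supervised vs. unsupervised learning (scikit-learn).
# The toy data and parameters are illustrative only.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=300, centers=3, random_state=42)

# Supervised: the model is handed the "rules" (labels) and learns to follow them.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised accuracy:", clf.score(X, y))

# Unsupervised: no labels; the model must discover the groupings itself.
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("Unsupervised cluster assignments:", km.labels_[:10])
```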

Unfortunately for AI proponents looking to live in the Jetsons age, this took far longer than first envisioned; however, AI is now very real and advancing much faster, in part because computer processing is also much faster. AI projects focused on redundant tasking with a well-conceived, rules-based infrastructure are already seeing impressive ROIs on their early AI investments. Most industry projections spanning all types of AI platforms forecast growth rates in excess of 33% over the next several years. Firms heavily invested in managing Big Data, like Google, Facebook, IBM, and Microsoft, which have been making substantial AI investments over the past decade, are now far more likely to lead the way into the next wave of AI advancements. In an article on how AI will change work environments, Mark Knickrehm cites a recent Accenture survey asking 1,200 C-level executives worldwide what they are doing with AI.

The response shows how CEOs are committing to AI projects: “75% say they are currently accelerating investments in AI and other intelligent technologies and 72% say they are responding to a competitive imperative to keep up with rivals by improving productivity and by finding new sources of growth” (Knickrehm, 2019). We further note Grand View Research, whose work forecasts AI revenue to surpass $733 billion by 2027.

[Figure: Projected revenue from AI]

Where Does AI Begin and What is the AI End Game?

Mathematicians understand that, under proper guidance, companies have used proven multiple regression models for decades to plot financial relationships, optimize things like warranties and cost probabilities, and stratify pricing options with quantitative analysis modeling. Even simple questions, like which pairing of pizza and chicken wings with a one-liter bottle of soda will make the most money, have been answered with mathematical modeling for many decades with great success.
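
For readers who want to see what this classic, pre-AI modeling looks like in practice, here is a minimal sketch; the bundle prices and revenue figures are invented for illustration.

```python
# Minimal sketch of the classic (non-AI) regression modeling described
# above: fitting revenue against bundle prices. All data is made up.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: pizza price, wings price, soda price (hypothetical bundles)
prices = np.array([
    [10.0, 6.0, 2.0],
    [12.0, 5.0, 2.5],
    [9.0, 7.0, 1.5],
    [11.0, 6.5, 2.0],
    [13.0, 5.5, 2.5],
])
revenue = np.array([520.0, 560.0, 495.0, 540.0, 575.0])  # weekly revenue

model = LinearRegression().fit(prices, revenue)
print("Coefficients:", model.coef_)          # marginal revenue per $1 change
print("Predicted revenue:", model.predict([[11.5, 6.0, 2.25]]))
```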

While these methods may seem like AI to some, and these proven techniques can produce impressive results, most AI engineers know there can be a distinct line in the sand between what a well-conceived AI platform can do with years of machine training and what a simple algorithm or linear regression model can produce given enough relevant data input. What, then, is the difference between quantitative analysis and AI? Let’s look at how Merriam-Webster defines artificial intelligence.

Definition of artificial intelligence: 1: a branch of computer science dealing with the simulation of intelligent behavior in computers; 2: the capability of a machine to imitate intelligent human behavior (Merriam-Webster.com, 2021).

AI purists would not likely concede these notions as defining AI. This is because AI cannot yet really “simulate” intelligent behavior if the comparison is to the human brain. Second, AI can imitate human behavior in some ways (cognitive recognition) and do it far better than non-AI solution sets, but it is a long way off in other ways, like making impulse decisions or incorporating unconventional thinking.

Why would a coach paid millions of dollars a year elect to go for a first down from their own 35-yard line on 4th and eight in the first half? There is no simulation pattern under a likely “trained” football AI routine that would favor this type of decision under conventional football wisdom, yet coaches do make out-of-the-box decisions on occasion. Machines don’t.

Moreover, as unsupervised AI machines are being trained to “learn up”, how does one train a machine to go for it on 4th and long when the odds are stacked against success? It is not logical, Spock. The Wikipedia definition helps contrast how the human brain thinks with how machine learning is taught to think.

[Figure: Wikipedia definition of Artificial Intelligence]

The Wikipedia definition of AI does a good job of breaking down why one AI platform can be very different from another, i.e., rules-based AI versus letting AI make its own rules. The takeaway is that AI strategies, and therefore downstream deployment strategies, must define the type of thinking process to be used. Next-gen AI designs will also have to choose between human and machine hierarchies. Further, integrators of existing platforms must decide how an increasing array of disparate data banks will interface with various AI metrics and support technologies, like geofencing or pulling data from deep neural networks. In ML speak, this means architecting which ML environments can overrule other machine environments when conflicts occur.
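
As a thought experiment, here is one minimal way such a hierarchy could be expressed in code; every name and precedence rule below is hypothetical, not any real platform’s API.

```python
# Hypothetical sketch: arbitrating between ML "environments" when their
# outputs conflict. The precedence order encodes which environment may
# overrule the others; all names here are illustrative.
from dataclasses import dataclass

@dataclass
class Decision:
    source: str        # which environment produced this decision
    action: str        # proposed action
    confidence: float  # the model's own confidence score

# Higher number = higher authority when decisions conflict.
PRECEDENCE = {"compliance_rules": 3, "supervised_model": 2, "unsupervised_model": 1}

def arbitrate(decisions: list[Decision]) -> Decision:
    """Pick one decision: highest-precedence source wins; ties go to confidence."""
    return max(decisions, key=lambda d: (PRECEDENCE[d.source], d.confidence))

conflict = [
    Decision("unsupervised_model", "approve", 0.91),
    Decision("supervised_model", "review", 0.74),
    Decision("compliance_rules", "deny", 1.00),
]
print(arbitrate(conflict))  # the rules layer overrules both learned models
```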

This distinction in AI architectures is so pervasive that two identical companies embarking on a similar AI goal could end up with deployments that, viewed from a structural perspective, are polar opposite versions of AI (supervised versus unsupervised), yet each designed to achieve the same end goal. In other words, what does a Ford F-150 truck have in common with a Porsche 930 other than running on four wheels? Each can get you from Point A to Point B, but they operate under very different design specs.

Despite the challenges associated with decoding human emotion, the debate over machines running unchecked is inching closer to an AI reality in which a machine can start to mimic “human consciousness”, a fear echoed by some of the world’s top AI architects.

One of the world’s greatest AI minds, the legendary Douglas Hofstadter, wrote a Pulitzer Prize-winning book in the 1970s (Gödel, Escher, Bach: An Eternal Golden Braid) about AI and merging the human element with a machine. He was also an early influencer behind the Google AI Brain. Years later, observing one of the more advanced AI achievements involving human emotion, Hofstadter voiced his fear of this line getting crossed, with human consciousness becoming embedded into machines. He discussed an AI application called “EMI”, for Experiments in Musical Intelligence, a wonder application tasked with creating classical music “in the style” of composers like Bach and Chopin.

While giving a speech at the prestigious Eastman School of Music in Rochester, New York, Hofstadter played a rare mazurka composed by Chopin in front of top students, including music theory and composition faculty. As noted by Melanie Mitchell, a disciple of Hofstadter who recounts this event in her book Artificial Intelligence, here is what happened when one musical expert heard the second version, created by the EMI AI machine:

“The second was clearly the genuine Chopin, with a lyrical melody; large-scale, graceful chromatic modulations; and a natural, balanced form” (Mitchell, 2019). This sentiment was echoed throughout the audience. To everyone’s surprise, the machine had fooled the experts, creating a world-class rendition in the style of one of the greatest composers of all time. Hofstadter went on to make the point that AI advancements that can incorporate human feelings could catch us off guard as a society and leave human intelligence “in the dust”, relegating human beings to “relics” in this new world order (Mitchell, 2019).

This fear is genuine and raises several questions about where AI should go. Even more persuasive is the fact that these human/machine hybrids are now developing at a much faster rate because of the enhanced capabilities of deeper learning platforms. After all, if machines can replace human triumph, creativity, and emotion, and do it handily, have we not just compromised our souls? What becomes the value of original human work if it can be enhanced severalfold by a machine in a second?

As the world embarks on implementing more advanced versions of AI, the first order of business is to understand what AI is made of and how it works. Let’s start with what feeds AI. The answer is data, and lots of it. Indeed, we have now reached “The Zettabyte Era”. A zettabyte is equal to a trillion gigabytes: 1,000 to the seventh power, or 1,000,000,000,000,000,000,000 bytes. When one visualizes this number and the vast supply of data that now exists, it makes sense why ML technologies hold so much promise.

Even more amazing: according to Cisco, this number will grow to over 150 zettabytes by 2025. The advancement of supercomputers was a key component in making newer AI applications work, because of the computing power needed to discover new drugs, conduct cancer research, architect DNA mapping, and power numerous other applications. AI can optimize the analysis of Big Data to scan hundreds of millions of possibilities in search of new discoveries. We can now perform in days complex computing tasks that previously would have taken decades to research all of the possible mathematical outcomes.

This brings up the technology convergence discussion. Modern-day AI is like an air traffic control tower. It has to manage multiple technology disciplines in the proper order to get all of the data it has accessed into and out of the AI tasking process, then solve for X, and do it cheaper than the existing methodology in order to justify AI transition costs. In addition, these faster AI engines must do this in a timely manner and without crashing.

It takes resources to set up and deploy deeper learning capabilities (neural nets), a key component in trying to unlock the human brain by accessing larger and larger amounts of data and then auto-training AI engines to build on previous knowledge bases that become increasingly intelligent with each successive iteration. One large Swedish bank, SEB, is taking the roughly 30% of customer-service calls not solved by Aida (its voice-assistant program) and retraining Aida on the human agents’ resolution techniques and solutions, so that the assistant can use that knowledge the next time it encounters a like-kind issue, further freeing up trained human personnel.
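
To illustrate the general shape of such a feedback loop (and only the shape: nothing below reflects SEB’s actual implementation), consider this toy sketch, where unresolved calls are escalated to a human and the resolutions are folded back into the assistant’s training.

```python
# Hypothetical sketch of the escalate-and-retrain loop described above.
# Nothing here reflects SEB's actual system; all names are illustrative.
CONFIDENCE_THRESHOLD = 0.8

class ToyAssistant:
    """Stand-in for a virtual assistant: answers from a learned lookup table."""
    def __init__(self):
        self.knowledge = {"reset password": "Use the self-service portal."}

    def answer(self, question):
        if question in self.knowledge:
            return self.knowledge[question], 0.95  # confident, known issue
        return "I'm not sure.", 0.2                # low confidence -> escalate

    def fine_tune(self, examples):
        self.knowledge.update(examples)            # fold resolutions back in

def handle_call(assistant, question, human_resolutions):
    answer, confidence = assistant.answer(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    resolution = "Agent resolved: escalate ticket to billing."  # human step
    human_resolutions[question] = resolution
    return resolution

assistant, resolutions = ToyAssistant(), {}
handle_call(assistant, "billing dispute", resolutions)  # escalated to a human
assistant.fine_tune(resolutions)                         # periodic retraining
print(assistant.answer("billing dispute"))               # now handled by the VA
```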

While the heartbeat of an advanced AI implementation ultimately rests within a software engine (a combination of algorithms and programs residing behind a user interface), it is the integration of multiple input libraries, like neural nets, Big Data mining, data compression, and image optimization, that must work together to intelligently process disparate data into a credible summary report. In part, it is this combination of multiple technologies working together that sets AI apart from being just a quantitative model.
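
As a rough sketch of that integration idea, here is a toy pipeline that composes several independent stages into one summary output; the stage names and logic are invented for illustration.

```python
# Toy pipeline composing disparate processing stages into a summary report.
# Stage names and logic are hypothetical, illustrating composition only.
from typing import Callable

Stage = Callable[[dict], dict]

def mine_data(ctx: dict) -> dict:
    ctx["records"] = [r for r in ctx["raw"] if r.get("valid")]  # data mining
    return ctx

def compress(ctx: dict) -> dict:
    ctx["sizes"] = [len(r["text"]) for r in ctx["records"]]     # toy "compression"
    return ctx

def summarize(ctx: dict) -> dict:
    avg = sum(ctx["sizes"]) / len(ctx["sizes"])
    ctx["report"] = f"{len(ctx['records'])} records, avg size {avg:.1f}"
    return ctx

def run_pipeline(stages: list[Stage], ctx: dict) -> dict:
    for stage in stages:   # each technology works on the shared context in order
        ctx = stage(ctx)
    return ctx

raw = {"raw": [{"valid": True, "text": "loan app"}, {"valid": False, "text": "noise"}]}
print(run_pipeline([mine_data, compress, summarize], raw)["report"])
```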

These data results may be further combined with deeper and deeper learning environments while integrating with AI support technologies, like sensory observation or GEO tracking, that define today’s more advanced AI experiments, a small percentage of which get deployed into meaningful applications. Finally, these attributes can combine with in-house proprietary technologies inherent in a company’s product line to gain a real competitive advantage. This can be expensive, however.

Today, only the largest tech companies have made significant investments in the technology infrastructure needed to power the largest AI projects. The alternative for some companies is to lease large-scale computing power. As an example, using the best available P3 environment on Amazon now costs $31.218 per hour; running it around the clock for 1,000 days works out to $31.218 x 24 hours x 1,000 days, or $749,232, to complete a large neural net operation (Przemek, 2019).

Think of what technologies are now used to optimize self-driving cars, for instance. How about a plane that can fly you to New York without a pilot? Now we are talking very significant technology integration, a key challenge for next-gen AI.

The noted AI expert Joaquin Candela, head of Facebook’s Applied Machine Learning (AML) group, makes a good point: “Figure out the impact on the business first and know what you are solving for” (Candela, 2019).

In essence, Candela’s point is that one does not invent an AI machine and then figure out how to use it, but rather should take the opposite approach: first understand the nature of the business goal, then ask whether an AI strategy can be adopted or modified to outperform what is being used today.

While many companies are looking to build AI strategies around mission-critical apps, each AI review must start with whether existing algorithms can be improved to increase speed and produce better end results for the consumer or business enterprise without using AI. In essence, has the application maxed out its optimization capability such that it requires AI to garner real improvements? Only then can one evaluate whether an AI strategy can be incorporated into the solution, with a design plan whose AI methodology can survive ROI hurdle objectives. Like any other technology upgrade, at some point AI has to pay for itself and provide a return on the investment.
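
As a back-of-the-envelope illustration of such an ROI hurdle (with entirely made-up figures), a payback check might look like this:

```python
# Toy payback-period check for an AI project; all figures are hypothetical.
def clears_roi_hurdle(upfront_cost, monthly_savings, hurdle_months):
    """True if cumulative savings recover the upfront cost within the hurdle."""
    payback_months = upfront_cost / monthly_savings
    return payback_months <= hurdle_months, payback_months

ok, months = clears_roi_hurdle(upfront_cost=750_000,    # e.g., leased compute
                               monthly_savings=45_000,  # labor hours saved
                               hurdle_months=24)
print(f"Payback in {months:.1f} months; clears 24-month hurdle: {ok}")
```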

To AI or Not to AI?

Now that we can see some of the challenges of not only teaching a machine to learn but making the commitment to build an AI team, it is understandable why some companies are taking a wait-and-see approach. ML is advancing, but to what standard? Moreover, what type of human bias may be embedded in a machine’s ability to choose, and at what point will machines begin to question their own instruction sets? This will further complicate traditional paths toward technology upgrades, ultimately forcing a discussion on who will make IT investment decisions going forward, man or machine?

The reality here is that firms that got proactive years ago with designing and implementing a robust AI strategy are already seeing big wins with some AI deployments, although these are for the most part limited to supervised AI architectures. These firms, however, are also significantly more likely to see bigger AI wins in the next 36 months, given their investments in creating baseline platforms to leverage targeted applications. Examples include Google Brain, IBM’s Watson AI, and Facebook’s CLUE (Conversational Learning Understanding Engine).

These significant AI achievements will allow the very best AI companies to deploy solutions much more quickly, using an AI engine that can be reused and modified for more and more solutions incorporating very task-specific mandates. A key advantage of these in-house AI platforms is that they were designed from the ground up to better integrate with the existing IT systems that must communicate with AI architectures. This AI learning curve can easily contain a lot of technology minefields for firms not well versed in AI integration issues.

Getting large departments used to rethinking how the business can implement AI takes time. Those companies that have not started to deploy an AI strategy (which can easily take years to implement) will need to devote more resources to AI to catch up.

This is why some firms will be cursed with AI failures if they jump in too quickly without a defined AI strategy, and others will simply lose because they chose in the short run “not to fail” by avoiding AI implementations. In the end, however, AI is going to change the game of business, and big successes with AI tend to come with big AI failures. Candela, Facebook’s AI pioneer, gets it: he bakes in a 50% failure rate on the thousands of AI experiments he is overseeing.

Big Data and AI?

Does One (ML) Machine Environment Wash the Other?

Recent gains in computing power have set the table to begin to crack AI challenges that require crunching tens of millions of data strings to deliver on the very latest trends in AI. This faster computing power is the fuel that runs deeper learning environments, enabling AI architectures to better attack mathematical and image challenges previously outside the window of what was possible. It will also accelerate the ability to track and define the complex pattern-recognition engines that are key to advanced ML learning processes.

[Figure: Compute used in training, AlexNet to AlphaGo Zero]

Significantly faster processing is particularly important for inventing and improving unsupervised AI learning environments, where machines can actually learn to think at much higher levels more closely aligned with the human brain. This will need to factor in human emotion where the goal is to better recognize the human element. Currently, most supervised AI loops are not designed to make out-of-the-box decisions, or they do so only with post-result human review.

As noted above, the computing power applied to training has seen a roughly 300,000x increase between AlexNet (2012) and AlphaGo Zero (2017), thanks to advancements in chip technologies. This will help drive next-gen AI to develop new solutions on broader AI platforms that need this power to maximize deeper neural net advances. We will know we have reached another major AI breakthrough when machines recommend solutions that are unconventional and even defy the statistical data, with ML learning how to argue that the rules don’t apply in this or that situation. That will be the functional equivalent of an Academy Award for Best Machine Actor, for thinking algorithmically outside the box.

Remember that 4th and long on your own 35-yard line in the second quarter when the game is tied? If the machine just said to keep the punter on the sideline, what type of AI algorithm was needed to override the statistical data that strongly suggests punting the ball? Hint: AI could decide that the element of surprise in a very specific situation may justify the risk of going for a first down on 4th and long. An example would be the machine detecting that the defense has shifted to a full blitz package to block the punt, so the odds of a receiver being open have increased. Either way, a decision on which rule supersedes, punting versus risk taking, must allow ML to fail; otherwise, the human gut instinct will never be factored into the next iteration of ML decision trees.
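
As a toy illustration, with invented probabilities and point values, here is how an expected-value comparison could flip the call once a situational input (the detected punt-block blitz) shifts the conversion odds:

```python
# Toy 4th-down decision: compare expected points of punting vs. going for it.
# All probabilities and point values are invented for illustration.
def expected_points(success_prob, success_value, failure_value):
    return success_prob * success_value + (1 - success_prob) * failure_value

def fourth_down_call(conversion_prob):
    punt_ev = expected_points(0.98, 0.4, -2.0)           # small field-position gain
    go_ev = expected_points(conversion_prob, 2.5, -2.5)  # convert vs. turnover on downs
    return ("go for it" if go_ev > punt_ev else "punt"), punt_ev, go_ev

# Conventional situation: low conversion odds -> the model says punt.
print(fourth_down_call(conversion_prob=0.25))
# Defense shows a full punt-block blitz: conversion odds jump -> go for it.
print(fourth_down_call(conversion_prob=0.60))
```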

Lastly, for AI to continue its projected meteoric rise, a major convergence of smarter technologies will need to continue. This will involve more advanced neural networks with smarter decision trees, larger-scale data banks with more advanced search capabilities, faster processing, and upgraded algorithms that can scan tens of millions of data points in seconds or less. The end game then becomes converting these solution sets into credible summary outputs or actionable steps. Integrating these somewhat disparate technology platforms is what will be needed for real convergence to occur in next-generation AI deployments.

Improving AI solution sets will also accelerate due to better neural nets and faster chip processing. Current AI models are already becoming increasingly relevant, and they will continue to improve as more deployments achieve credible ROIs. AI is no longer just a trendy PR strategy; it can actually make money by lowering costs across a broad array of potential solution sets. For this reason, narrow offerings targeting specific tasks or narrower process objectives will continue to garner the early AI victories, because the pathway to measurable ROI savings will be more apparent to costing managers. Early AI products will still rely on human interpretation as an approval step for AI implementation decisions, at least for the foreseeable future.

Is an AI Commitment an Investment Paradox?

I feel compelled to point out that many CEOs of companies with revenues in the $100-$500M range, operating with reduced AI budgets, could be in a no-win situation when considering a significant AI commitment. The first step is to recognize that all companies are already in the technology business, regardless of what they sell. AI technology is what will define all growth models going forward. This ship has already sailed. Firms can either adapt to this generational change in business models or eventually succumb to a buyout or merger by a company with higher AI-enhanced margins. Companies operating without AI technologies will sell for lower multiples, reflecting lower net present values on future cash flows versus AI-equipped market leaders.
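
To illustrate that valuation point with invented numbers: discounting two otherwise identical cash-flow streams at the same rate, the firm with AI-enhanced margin growth simply carries the higher net present value.

```python
# Toy NPV comparison; cash flows and discount rate are invented.
def npv(cash_flows, rate):
    """Net present value of yearly cash flows, discounted at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

rate = 0.10
legacy = [10_000_000] * 5                                     # flat margins, no AI
ai_equipped = [10_000_000 * 1.08 ** t for t in range(1, 6)]   # AI-driven margin growth

print(f"Legacy NPV:      ${npv(legacy, rate):,.0f}")
print(f"AI-equipped NPV: ${npv(ai_equipped, rate):,.0f}")
```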

Companies with revenues below $100M will have access to outsourced “AI services”; however, the value proposition of these offerings will vary significantly, and only a small percentage of this packaged AI market will have real cutting-edge AI value. So-called “AI marketing companies” that package or lease AI from other companies may perform well below those that are close to the source code and pioneered the first AI releases. Know thy code and know where it came from. If your provider owns the source code or holds AI patents, you are much closer to the real thing.

Where is AI Headed? Summary Conclusions

The great futurist Ray Kurzweil has penned several books about the future of AI and its implications for humankind. The majority of his predictions have come true or are in the process of coming true. In his book The Age of Spiritual Machines, Kurzweil predicts that by 2029 a $1,000 computer will be 1,000 times smarter than a human being. He also foresees that by 2029 the line between the definition of a human being and a “conscious machine” will start to blur, such that a movement will arise, creating an entire area of law, seeking to grant “robot rights” to machines. Remember that demand for better cooling? These distinctions between human and machine intelligence will be further fueled by many people getting neural implants that interact with the human brain, becoming part human and part machine. Machines, on the other hand, will gain built-in emotional capabilities, becoming more human-like. The result will be an even bigger controversy over what defines the line between machines and the human brain.

This AI paradox for companies adopting AI will continue into the foreseeable future. On the one hand, if companies fail to envision AI in their future, AI-savvy competitors will take market share. On the other hand, a company implementing an ill-conceived AI strategy that does not produce measurable results over a defined payback period could fall further behind in operational performance, given the cost savings now available with AI models. These AI efficiencies will continue to accelerate corporate earnings and will continue to mount as newer AI technologies are converted into the production and distribution process.

That said, it is imperative for smaller companies contemplating initial AI experiments to start with a narrow focus designed to produce a small incremental win, so boards feel comfortable committing larger resources to downstream AI deployments.

This strategy can help build confidence that the AI program has the right team in place and is making the right outsourcing decisions for AI. Those companies that can embrace the new world order with AI will grow profits faster, drive share prices higher, and create more opportunities for all of the stakeholders embedded in their ecosystem.

Author Note: I want to acknowledge the tremendous contributions my references have made toward AI advancements. Their knowledge and wisdom have been very helpful in my ongoing AI research efforts. I will post Part II of this blog later in the spring, focusing on what to look out for when implementing AI in your marketing plan. My next blog, titled “Can Deal Structure Trump Valuation”, will be released in March.

Signing Off,

John C Botdorf, MBA
Chairman
botdorfjohn@gmail.com
303.810.7710

Author:
“Mastering Your Time Share”
“Mastering Your Company”
“Mastering Your Diva”




References.

  • Brynjolfsson, Erik, and Andrew McAfee (2019). “The Business of Artificial Intelligence.” In Artificial Intelligence, Harvard Business Review Press, Chapter One, pp. 14-15.

  • Grand View Research (July 2020). www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market.

  • Knickrehm, Mark (2019). “How Will AI Change Work?” In Artificial Intelligence, Harvard Business Review Press, Chapter Eight, pp. 99-100. Accenture survey of 1,200 C-level executives.

  • Kurzweil, Ray (1999). The Age of Spiritual Machines. See also “Predictions made by Ray Kurzweil,” Wikipedia.

  • Merriam-Webster.com Dictionary (2021). “Artificial Intelligence.” https://www.merriam-webster.com/dictionary/artificial%20intelligence. Accessed 5 Feb. 2021.

  • Mitchell, Melanie (2019). Artificial Intelligence: A Guide for Thinking Humans. Farrar, Straus and Giroux, New York, NY.

  • Przemek, Chojecki (2019, Jan 31). “How Big Is Big Data?”

  • Wikipedia (2020). “Artificial Intelligence.”