Bill Higgins is Director of Watson Research and Development at IBM. He leads the integrated research and development team responsible for evolving the foundational IBM AI technologies powering their main products and systems. He’s especially interested in the intersection of culture and technology, particularly as it relates to increasing diverse representation in technical leadership. Bill joins Barry O’Reilly on this week’s show to discuss what it takes to innovate at scale with AI, and how teamwork makes the AI work.

Bill’s Start

Bill shares how he started in DevOps and made his way to becoming a leader in software engineering. He had always worked on software products, but after a certain point he became disenchanted with the process of building them, especially the methods and tools involved; he thought it could be much better than it was. In the early 2010s, he became enamored with the DevOps movement and sought to drive a DevOps culture at IBM, quite successfully. His team was one of the first to be sent to the IBM design camp for product teams. He describes the experience and how it impacted his career. [Listen from 1:50]

Deterministic vs. Probabilistic

Barry recalls how Bill shaped his perception of AI. “I still remember… being blown away by the clarity of how [Bill’s colleagues] could talk about it. I got smarter just listening to them, and so many of the notions I had of what AI could do were blown away very quickly,” Barry says. He asks Bill to disclose the assumptions and knowledge he had to unlearn from his traditional engineering mindset. Bill responds that he realized the field of AI is a very different paradigm from traditional programming: the latter is largely about methodically defining a set of rules to create a deterministic program. AI, by contrast, is probabilistic by nature, as Bill illustrates with the example of machine learning. “We have a product at IBM called Watson Assistant, which is a toolkit for creating chatbots for customer care services,” he shares. “When you’re training the chatbot, you provide it with examples of utterances you’d expect a person to say for a particular scenario… and give it test utterances to see if it responds correctly.” He talks about AI and the probabilistic approach. [Listen from 8:00]
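The train-on-example-utterances workflow Bill describes can be sketched in a few lines of Python. This is a hypothetical toy, not the Watson Assistant API: each intent is "trained" from example utterances, and classification returns ranked confidence scores rather than a deterministic yes/no, which is the probabilistic shift Bill contrasts with rule-based programming.

```python
def tokenize(text):
    """Split an utterance into a set of lowercase words."""
    return set(text.lower().split())

def train(examples_by_intent):
    """Build a bag-of-words vocabulary per intent from its example utterances."""
    return {intent: set().union(*(tokenize(u) for u in utterances))
            for intent, utterances in examples_by_intent.items()}

def classify(model, utterance):
    """Score each intent by word overlap (Jaccard similarity).

    Returns (intent, confidence) pairs sorted best-first -- a confidence
    ranking, not a single guaranteed answer.
    """
    words = tokenize(utterance)
    scores = {}
    for intent, vocab in model.items():
        overlap = len(words & vocab)
        union = len(words | vocab)
        scores[intent] = overlap / union if union else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

# "Training": provide example utterances per scenario, as Bill describes.
model = train({
    "reset_password": ["I forgot my password", "reset my password please"],
    "billing": ["question about my bill", "why was I charged twice"],
})

# "Testing": probe with a test utterance and inspect the confidence ranking.
print(classify(model, "please help me reset my password"))
```

A real system would use a trained statistical model instead of word overlap, but the shape is the same: you improve behavior by adding better training examples, not by editing rules.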

(Listen to the previous show, Stepping into the Metaverse with Aaron Frank)

AI Through the Years

Bill and Barry explore the history and development of AI, and IBM’s role in both. “There was this really famous conference at Dartmouth College in 1956 with some of the legends of the industry… that established AI as a field of study. They adopted the term artificial intelligence as opposed to one of the competing terms like cybernetics,” Bill remarks. “IBM actually helped schedule and broker that conference, so we were literally there before the beginning.” Throughout the 1960s and 70s, the fundamental parts of modern AI technology – neural networks, the concept of machine learning, natural language and speech processing – broke through, but AI was still considered a research field, not fit for real enterprise use, until circa 2011. Around that time, the explosion of data from mobile devices began to take hold: cloud storage was widely adopted, neural networks advanced to the point of deep learning, and systems like Siri and IBM Watson’s Jeopardy! win arrived. This opened people’s eyes to the possibility of AI’s practical uses. [Listen from 17:25]

(Listen to Designing DAOs with Ja-Naé Duane)

Teamwork Makes the AI Work

To achieve something great with AI, you must have equally great AI algorithms made by people waist-deep in machine learning, Bill explains. They must understand the whole lifecycle of machine learning, make their algorithms available via understandable developer APIs, and run them at internet scale. One of the biggest mistakes companies make is investing millions of dollars solely in hiring scholars with machine learning degrees from reputable institutions. You need machine learning people to create the algorithms, but you also need software developers to create the APIs and internet-scale architectures. Bill likens this teamwork to building a soccer team: if you want to compete in, say, a European league, you wouldn’t go out and hire the 11 best goalies in the world. “To do anything good or anything excellent, you need to get different disciplines not just coordinating together, but actually collaborating,” he advises. “That requires them to have a bit of a T-shaped skill set; where they are deep in something like engineering, but they know just enough about design to understand how the designers add value, and when they should proactively reach out to them rather than just showing up to a couple of status meetings every week and coordinating.” [Listen from 26:40]

Building Great AI

Innovators face two hard problems when creating foundational AI components, Bill tells Barry. “The first one is that fusion, that synthesis of really excellent machine learning, algorithm creation and excellent software development for both creating the APIs but also creating the internet scale architectures… Number two is how do you create an innovation pipeline.” IBM’s experience has been that innovation is difficult to commercialize quickly and at scale. They found that a modular architecture helps them add new components more readily. Extensibility is another key principle. “So if you have a really good extensible system then you can basically allow them to run their new thing – that might not be high quality, may not be secure – as an extension to the system in the test environment and therefore just move really quickly,” Bill comments. He and Barry agree that good collaboration and composability are two additional major aspects of a good innovation pipeline. [Listen from 30:40]
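The extensibility principle Bill describes can be sketched as a small plugin pattern. This is a hypothetical illustration (not IBM’s architecture): a core pipeline with trusted stages, plus an extension point where an experimental, possibly-not-yet-production-quality component can be plugged in and tried in a test environment without modifying or destabilizing the core.

```python
class Pipeline:
    """A core processing pipeline with a plugin-style extension point."""

    def __init__(self):
        self._stages = []       # core, trusted stages: must succeed
        self._extensions = []   # experimental stages: run best-effort

    def register_stage(self, fn):
        self._stages.append(fn)

    def register_extension(self, fn):
        """Plug in an experimental component without touching core code."""
        self._extensions.append(fn)

    def run(self, data):
        for stage in self._stages:
            data = stage(data)          # a core failure propagates
        for ext in self._extensions:
            try:
                data = ext(data)        # an extension failure is isolated,
            except Exception:           # so an unproven component can't
                pass                    # take down the whole system
        return data

# Core behavior stays fixed; the experimental feature rides along as an extension.
pipe = Pipeline()
pipe.register_stage(lambda text: text.strip().lower())
pipe.register_extension(lambda text: text + " [beta-annotated]")
print(pipe.run("  Hello World  "))
```

The design choice mirrors Bill’s point: because the new component lives behind an extension interface, teams can iterate on it quickly in test, then promote it to a core stage once it meets quality and security bars.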