Read the attached case, Grok: Action Intelligence for Fast Data, and write a 5-page analysis paper that addresses the following (in bullet format):
- Discuss and describe the business strategy currently being used by the company (minimum of 1 page).
- Provide a total of four findings of fact, one from each of the following four functional areas of business:
  - Finance or Accounting
- Provide a full justification and recommendation for each finding of fact (minimum of 1 page each).
Grok: Action Intelligence for Fast Data
What then is intelligence such that brains have it but computers don’t?
Why can a six-year-old hop gracefully from rock to rock in a streambed while the most advanced robots of our time are lumbering zombies?…
Why can you tell a cat from a dog in a fraction of a second while a supercomputer cannot make that distinction at all?
—Jeff Hawkins, On Intelligence
A SMALL COMPUTER sits on top of a soldier’s backpack. It alerts him when it detects anything suspicious, playing the role of night-vision goggles combined with radar, and providing 360-degree long-range coverage. So far, this scenario is possible only in video games like Call of Duty or Halo 3. But with Grok, cutting-edge pattern-recognition software developed by Jeff Hawkins and associates, such equipment may soon become reality.
Grok is the commercial culmination of Jeff Hawkins’ lifelong quest to understand how the brain works. A successful serial entrepreneur, Hawkins founded Palm Computing in 1992, followed by Handspring in 1998. He is credited with bringing to market devices like the Palm Pilot, Treo smartphone, and Palm Pre. These significant technological breakthroughs paved the way for today’s ubiquitous iPhones and BlackBerrys. Next, Hawkins continued his pursuit of “smart” computing by founding the Redwood Neuroscience Institute (RNI) in 2002. RNI is a nonprofit organization devoted to developing “biologically accurate and mathematically well-founded models of memory and cognition.”2
In March 2005, Hawkins partnered with Dileep George and Donna Dubinsky to form Numenta (renamed Grok in May 2013),3 a startup company whose objective is to maximize the impact of Hawkins’ hierarchical temporal memory (HTM) technology.4 Grok is the commercial name for the intelligent software platform based on Hawkins’ HTM theory, which he published, with Sandra Blakeslee, in his 2004 book On Intelligence. What makes Grok unique is that it processes information in the same way as the neocortex, the area of the human brain responsible for sensory perception, motor control, spatial reasoning, conscious thought, and language.5 Hawkins believes that Grok can be used to create “thinking machines” with a wide array of applications in fields as diverse as security and monitoring, energy management systems, digital pathology, and the detection of financial fraud.
Given Hawkins’ legendary reputation, many people in Silicon Valley speculate that Grok will be hugely successful. Many even wonder if Grok will become the standard for intelligent computing, much like how Microsoft became the industry standard for PC operating systems. But whether Jeff Hawkins will be able to successfully commercialize this new technology is an important question. To do so, he will need to convince multiple new clients to take a risk with his ground-breaking software platform, luring them away from competing big data solutions offered by more established (and well-funded) competitors like IBM. Managing such early growth can be tricky: Grok needs enough new clients to maintain a steady revenue stream for product development, yet not so many that the company’s personnel are overwhelmed and unable to respond to clients’ service needs in an effective and timely manner. While excited to have reached the commercialization stage and finally introduce his “baby” to the computing world, Jeff Hawkins knows that careful strategic planning and execution is as important as ever to Grok’s long-term success.
Hawkins’ Quest to Discover How the Brain Works
Hawkins’ interest in the brain started when he was a teenager. When young Jeff wanted to understand something, he would go to the library and find a book that explained it. He could almost always find at least some information on whatever topic interested him. Surprisingly, he found no theories, good or bad, on how the brain works. It bothered him that people had no idea how this master organ functioned.6
CHILDHOOD DREAM REKINDLED
In September 1979, three months after he graduated from Cornell University with a degree in electrical engineering, Jeff Hawkins’ interest in brain theory was rekindled. Working as a junior engineer at Intel (which was then merely 11 years old), he came across a newly published issue of Scientific American dedicated entirely to the brain. To many aspiring neuroscientists, including Hawkins, it was “one of the best Scientific American issues of all time.”7 Hawkins thought the most intriguing article was the final one, “Thinking about the Brain,” written by Francis Crick who, with James D. Watson, discovered the double-helix structure of DNA. Hawkins later described this eureka moment as follows:
Crick argued that in spite of a steady accumulation of detailed knowledge about the brain, how the brain worked was still a profound mystery. Scientists usually don’t write about what they don’t know, but Crick didn’t care. He was like the boy pointing to the emperor with no clothes. According to Crick, neuroscience was a lot of data without a theory. His exact words were, “What is conspicuously lacking is a broad framework of ideas.” To me this was the British gentleman’s way of saying, “We don’t have a clue how this thing works.” It was true then, and it is still true today.8
When he finished reading, Hawkins put down the magazine and thought to himself, “I have to work on this.” He believed that once he understood how the brain worked, he would be able to build more-intelligent machines. Hawkins decided to pitch his idea to his current employer, the company that invented memory chips and microprocessors. He wrote a letter to Gordon Moore, Intel’s co-founder, asking permission to start a research group—even if Hawkins was the only member—that would focus on understanding how the brain works.9
Moore referred Hawkins to Intel’s chief scientist, Ted Hoff, who was an avid researcher in neural network theory and had a long history working with artificial neurons. After listening to Hawkins’ proposal, Hoff responded, “No, I know all about brains, you are never going to succeed.”10 Hoff simply did not believe anyone could work out brain theory “in the foreseeable future.”11 While Hoff’s assessment that brain theory was nowhere near advanced enough to be useful for commercial application was correct, the young and eager Hawkins was still disappointed.
Not one to give up easily, Hawkins decided to pursue his passion by enrolling in a doctoral program at MIT, a hotbed for artificial intelligence (AI) research. According to John McCarthy, who coined the term in 1956, artificial intelligence “is the science and engineering of making intelligent machines, especially intelligent computer programs.”12 A machine is considered intelligent when it can “successfully pretend to be human to a knowledgeable observer.”13 Hawkins was hoping to find someone who would be interested in brain theory. When he went to MIT to interview, however, he was surprised to learn that the MIT computer scientists believed brains were just messy computers and there was nothing to be learned by studying them. Hawkins’ application to MIT’s doctoral program in computer science was rejected.
Jeff Hawkins continued to work at Intel, where he taught courses on microprocessor design and trained field applications engineers. He finally left Intel for GRiD Systems in 1982, looking for an atmosphere that offered more room for growth and influence. GRiD, which was founded in 1979 and went public in 1981, was exactly such a place.14 In 1982, GRiD scientists invented the first clamshell laptop computer, the GRiD Compass. One of the first things Hawkins created was GRiDTask, a high-level programming language optimized for creating applications on GRiD’s laptops. “GRiDTask and I became more and more important to GRiD’s success,” said Hawkins.
BIOPHYSICS STUDENT AT UC-BERKELEY
Although Hawkins’ career was going well, his desire to pursue brain theory persisted. Thus, after seven years of working in the computing industry, Hawkins enrolled in the biophysics doctoral program at the University of California, Berkeley, in January 1986. He explained, “Well, if I cannot do it in the computer science world, I figure I will do it in the neuroscience world.”17 His creative value was so rare that GRiD offered him a leave of absence with the option to return at any time with the same salary, position, and even stock options.
Hawkins was excited to be a graduate student, immersing himself in hundreds of papers by anatomists, physiologists, philosophers, linguists, computer scientists, and psychologists. A year into his doctoral studies, he wrote his PhD thesis proposal on a theoretical mathematical approach to understanding the neocortex. However, because no one on the faculty would sponsor his research, Hawkins’ thesis proposal was rejected. While at Berkeley, Hawkins also wrote a pattern-classifier program based on his theory of auto-associative memories, which he later patented and developed into a handprinted-character recognizer. This was the predecessor to the handwriting recognition software Graffiti, which runs on today’s Palm operating system.
After the second failed attempt to study brain theory, Hawkins did some major soul searching. He decided to return to GRiD and do four things:
One thing I decided to do was that I am going to mature and learn how to make institutional change . . . how to influence people, how to make people change ideas and so on. . . . I am going to work on that. The second thing was that I have to make a name for myself. If people respect me, they will listen more to me. . . . The third thing I want to do is to make some money. I need to raise my family. . . . I may need to fund my research down the road . . . The last thing, let the neuroscience mature.18
Back at GRiD, Hawkins continued to work on product design. Although GRiD gave him a VP title, he had no direct reports. GRiD told him just to do whatever he wanted as long as he created new ideas that were commercially viable. And so he created GRiDPAD, the first tablet laptop computer.19
Jeff Hawkins: Serial Entrepreneur
While working at GRiD, Hawkins realized that a handheld computer that serves the consumer market could be a huge business opportunity. Despite the failure of Apple’s Newton, Hawkins strongly believed that in the future everyone was going to have a personal computer that fit into their pocket. He thought to himself, “The personal (mobile) computer will become the primary computing device for everybody. This is inevitable. But I am going to make it happen sooner!”20 However, his idea seemed too risky to GRiD because of the company’s focus on the GRiDPAD and the vertical application of laptops. As a consequence, Hawkins left GRiD and started his first new venture, Palm Computing.
PALM COMPUTING AND PALM PILOT
Hawkins recruited Donna Dubinsky, a Harvard MBA who had previously worked for Apple, to run Palm’s day-to-day business, which freed him to focus solely on new-product development. Yet despite their combined expertise, Palm Computing was not an immediate success. After several initial product failures, Hawkins went back to the drawing board and came up with a completely different design, something small and simple. After talking to customers who had bought the Newton, Hawkins realized that the initial mobile devices needed to compete with paper-based planning systems and not full-blown personal computers. He also learned that Palm Computing would have to build everything—the hardware, the handwriting recognition software, the operating system, synchronization—and integrate the pieces.
But there was a major obstacle to commercializing the new “Palm Pilot”: The company had only $3 million left, which was not enough to bring the product to market. Therefore, Hawkins sold the company for $44 million to U.S. Robotics in 1995.21 The entire Palm team stayed on because they all felt passionate about the future of mobile computing. The Palm Pilot was brought to market in early 1996 and turned out to be a huge success.
In 1997, 3Com acquired U.S. Robotics. Dissatisfied with the direction that Palm was going under 3Com’s management, Jeff Hawkins and Donna Dubinsky requested that 3Com spin out Palm as an independent company. When 3Com refused to do so, Hawkins and Dubinsky left Palm in the summer of 1998. Hawkins believes that “if they [3Com] had spun us out, I’d still be there.”22 (3Com eventually did spin out Palm in the summer of 2000. In the spring of 2010, HP acquired 3Com for $2.7 billion and Palm for $1.2 billion.)
HANDSPRING AND TREO
Hawkins and Dubinsky almost immediately formed a new company, Handspring, with the intent of advancing handheld computing technology to a higher level. With venture capital from firms such as Benchmark Capital and QUALCOMM, they launched their first product, the Visor, in late 1999. The Visor’s Springboard expansion slot created an open platform that radically expanded its functionality, while maintaining compatibility with the PalmOS software that Handspring leased from its primary competitor. Developers quickly came up with modules that included features such as MP3 players, wireless communications, global positioning systems, and digital photography.
Handspring held a successful IPO in 2000, and introduced its next-generation Treo in 2001.23 The Treo was among the earliest smartphones, with an integrated cellular phone and built-in keyboard to enhance e-mail and SMS capabilities. Handspring merged with the hardware division of Palm in 2003, and was renamed palmOne, Inc.24 As Handspring was being folded into Palm, Hawkins knew it was time for him to go back to his “brain dream.”
REDWOOD NEUROSCIENCE INSTITUTE
It had been almost 25 years since the young Hawkins was inspired by Scientific American to study how the brain worked. After being rebuffed repeatedly by academic institutions, Hawkins realized that the only way to advance brain theory was to start his own research institute. Thus, in 2002, Jeff Hawkins founded the nonprofit Redwood Neuroscience Institute (RNI), his third new venture. Located in Menlo Park, California, RNI was a single-task scientific organization, devoted to understanding how the human neocortex processes information.25 The institute’s structure was based on a combination of corporate and scientific principles, and grew to 10 full-time employees under Hawkins’ leadership.
As the inventor of the Palm Pilot and the Treo, Hawkins was elected to the National Academy of Engineering in 2003.26 Already a legend in the computer industry, Hawkins made his mark on neuroscience when he published his book, On Intelligence, describing his “memory prediction framework” of how brains work. The book was highly acclaimed, selling millions of copies around the world with translations in 16 languages. In it, Hawkins proposed a single principle or algorithm underlying all cortical information processing. While there are different regions in the neocortex that are linked to different functions (e.g., visual or auditory), they are remarkably similar in their structure. The only thing that makes the visual region different is that it is connected to the eyes and not the ears.
One of the first major breakthroughs at RNI was the creation of a proof-of-concept program for Hawkins’ memory prediction theory. Dileep George, then a graduate student in electrical engineering at Stanford University, turned Hawkins’ neuroscience theory into a mathematical algorithm for his dissertation research. The algorithm was then transformed into computer software,27 resulting in the development of a machine-based learning model called hierarchical temporal memory (HTM). George’s research made the operationalization of Hawkins’ brain theory possible.
Hawkins’ Fourth Venture—Numenta/Grok
RNI made so much progress on neocortical theory that Hawkins moved RNI to the University of California–Berkeley to continue its neuroscience research,28 and founded Numenta in 2005 to commercialize the HTM technology. Dileep George joined the Numenta managerial team as co-founder and chief technology officer after graduating from Stanford. (In May 2010, Dileep George went on “an extended leave of absence from Numenta to explore forming a new company focused on applications.”)29
To run Numenta, Hawkins brought in long-time business partner Donna Dubinsky as chief executive officer and board chair. Ms. Dubinsky had spent 10 years at both Apple Computer and Claris, a software subsidiary of Apple, where she gained formidable experience in operations and strategy.
The Dubinsky–Hawkins business partnership is a powerful team with a proven entrepreneurial track record (i.e., Palm and Handspring). “With the combination of Donna at the helm and Jeff leading the technology—that’s about all I needed to know,” says Mr. Saal, investor and a member of Numenta’s board of directors.30 The young company also has a strong technical advisory board composed of leading scholars in developmental biology, computer science, neuroscience, and electrical engineering.
In May 2012, Numenta hired Rami Branitzky to serve as CEO and oversee the company’s transition from the research and development stage into product commercialization (Ms. Dubinsky remains actively involved as board chair).31 Mr. Branitzky holds an MBA in Information Systems and Finance from New York University’s Stern School of Business, and has more than 15 years of experience in the software industry, including multiple leadership positions within SAP AG. One year later, in May 2013, Mr. Branitzky announced that the company was changing its name to Grok to “better reflect [its] mission to help companies automatically and intelligently act on their data.” He provided the following explanation of the name’s origins:
Merriam-Webster defines Grok as a verb meaning “to understand thoroughly and intuitively.” Robert Heinlein coined the term in his 1961 novel Stranger in a Strange Land. It seemed the perfect fit for a technology that can learn and adapt to changing patterns automatically. Heinlein described the term as a Martian word that meant “to drink,” which evokes imagery of Grok thirstily ingesting fast data streams. The icon next to the name, affectionately known in our marketing department as “the Bug,” actually represents waves of data flowing into Grok.33
Grok’s headquarters is in Redwood City, California, at the center of Silicon Valley, one of the world’s leading high-tech hubs and home to a large number of cutting-edge entrepreneurs, engineers, and venture capitalists. Grok is also surrounded by world-class research universities (e.g., Stanford and UC-Berkeley), providing the company with access to a large pool of talented scientists. Thus, Grok’s location provides it with multiple opportunities for increasing its innovative potential.34
HIERARCHICAL TEMPORAL MEMORY
Grok’s way of building an intelligent machine is fundamentally different from the approach taken by artificial intelligence. AI makes the assumption that intelligence is defined by behavior,35 and can be likened to a box where inputs (data) go in and behavior comes out. According to Hawkins, however, real intelligence is defined not by behavior but by the ability to make accurate predictions about the future:
We experience the world as a sequence of patterns, and we store them and we recall them. When we recall them we match them up with reality and we are making predictions all the time. Assume someone changes your door at home and moves your door knob by two inches. When you get home you put your hand out and reach for the door knob, you will notice it at the wrong spot. The AI people approach this by building a door database which has all the door attributes. This is not how we humans do it. What the brain does is that it is making constant predictions all the time. As you put your hand down on the table you expect to feel it stop. When you walk, even if you miss by 1/8th of an inch, you will know something is changed. You are constantly making predictions about your environment all the time. It is the prediction that leads to intelligent behavior.
In other words, AI attempts to anticipate every single data permutation, and thus requires an extremely large knowledge database. Constructing such databases is a time- and labor-intensive process, compounded by problems such as how to collect the data and organize it in useful ways. It also takes millions of lines of computer code for AI to model human intelligence. HTM solves the efficiency problem, one of the biggest obstacles faced by AI, by using data from the past to make predictions about the future.
Hawkins further explains the difference between AI and HTM by comparing reptiles to mammals: An alligator has very complex behavior including running, sensing, seeing, hearing, and touching. It has a brain, which Hawkins calls the old brain, but it is not considered intelligent. Through evolution, however, mammals developed the neocortex, which is a new (memory) layer on top of the old (sensory) brain. As a sensory signal feeds into the old brain, it also goes up into the neocortex where it is stored in memory. In the future when a person experiences similar things again, the neocortex plays back the stored memories, informing the person what to expect next. Hawkins provides an illustrative example: “A rat runs into a maze and learns the maze. The next time it gets into the maze it still has the same behavior but all of a sudden it is smarter because it recognizes the maze and knows which way to go.”37
HTM accomplishes this by feeding the system with information and letting it learn by itself through observations, just as a child learns by observing the world around her. Modeled after the structure and algorithms of the neocortex, HTM consists of a collection of nodes arranged in an inverted tree-shaped hierarchy (Exhibit 1). Each node in the hierarchy performs two basic operations: (1) It looks for common spatial patterns, things that happen at the same time, and (2) it observes their sequences over time. Once it recognizes a sequence from prior experience, the node passes the name of that sequence to its parent node in the level above. Meanwhile, the parent nodes are doing the same thing, passing the recognized sequences up to the next highest level. Each node also knows statistically what is likely to happen next, enabling it to make predictions. It passes these predictions down the hierarchy, telling the respective child nodes, “Here is what you should be expecting next.”
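The two node operations described above can be sketched in a few lines of code. The following is a toy illustration only, a drastic simplification of HTM rather than anything resembling Grok's actual implementation; all class and variable names are hypothetical.

```python
from collections import defaultdict

class ToyHTMNode:
    """Toy sketch of the two basic node operations: (1) memorize
    spatial patterns that occur, and (2) learn which pattern tends
    to follow which, so the node can predict the next one."""

    def __init__(self):
        self.patterns = {}                              # pattern -> assigned name
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, pattern):
        # (1) Look for a common spatial pattern; name it if it is new.
        name = self.patterns.setdefault(pattern, f"p{len(self.patterns)}")
        # (2) Record the temporal sequence: what followed what.
        if self.prev is not None:
            self.transitions[self.prev][name] += 1
        self.prev = name
        return name                                     # passed up to the parent node

    def predict(self):
        # Statistically most likely next pattern, passed down the
        # hierarchy as "here is what you should be expecting next."
        counts = self.transitions[self.prev] if self.prev else {}
        return max(counts, key=counts.get) if counts else None

node = ToyHTMNode()
for p in ["AB", "CD", "AB", "CD", "AB"]:
    node.observe(p)
print(node.predict())   # "p1", the name assigned to "CD"
```

A real HTM stacks many such nodes in a hierarchy, with each level recognizing and predicting longer, more abstract sequences than the one below it.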
Take vision as an example. Input patterns begin at the low-level retinal signals. More meaningful information, such as lines and regions, is extracted further up the hierarchy. At even higher levels, specific objects and their behaviors are identified. Once the inputs are fully processed, information about the recognized objects and prediction of their behavior over time flows back down to lower levels. Hawkins argues that this process is nature’s data structure for knowledge about the world: “If we understand how the structure works we can build machines that work like this.”38
Taken together, HTM models the behavior of an intelligent being by doing four things. First, it discovers and assigns causes to events happening in the world. Second, once the causes have been discovered, it can use this previously accumulated experience to infer causes of novel inputs. Third, HTM makes predictions regarding future causes and inputs based on probabilities assigned at each level. In a final step, it generates motor behavior from its probabilistic predictions. (For more on how HTM models work, see www.numenta.com/htm-overview/htm-algorithms.php.)
As a result, HTM software applications can be trained for a specific use. Consider the task of differentiating a picture of a dog from a picture of a cat. While this is a simple job for even a three-year-old child, it is almost impossible for computers to do. Yet Dileep George demonstrated that this could be done using HTM (see Exhibit 2).39 George first made line drawings of simple objects such as a cat, a dog, and a helicopter. He loaded up his algorithm and trained the computer to identify these objects by animating the original drawings. After the computer was trained, it could slowly put variations of drawings in the right category and even give a probability estimate of how sure it was of its answers.
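The flavor of George's demo, categorizing a varied drawing and attaching a confidence, can be conveyed with a far simpler stand-in. The sketch below uses plain nearest-match scoring over tiny binary "drawings" instead of HTM; the training grids and all names are invented for illustration.

```python
# Toy stand-in for the cat/dog/helicopter demo: score a new 3x3 binary
# "drawing" against one stored example per category and report both the
# best-matching label and a rough probability. (Not HTM; illustration only.)

TRAINING = {
    "cat":        [1,0,1, 1,1,1, 1,0,1],
    "dog":        [1,1,0, 1,1,1, 0,1,1],
    "helicopter": [0,1,0, 1,1,1, 0,0,1],
}

def classify(drawing):
    # Score each category by how many pixels match its training example.
    scores = {label: sum(a == b for a, b in zip(drawing, ex))
              for label, ex in TRAINING.items()}
    total = sum(scores.values())
    best = max(scores, key=scores.get)
    return best, scores[best] / total   # label plus a crude confidence

label, prob = classify([1,0,1, 1,1,1, 1,1,1])   # a slightly varied "cat"
print(label, round(prob, 2))
```

Unlike this lookup, HTM reaches its answer by recognizing spatial and temporal structure learned from the animated drawings, but the output, a category plus a probability estimate, has the same shape.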
After 25 years of starts and stops, Hawkins’ vision of creating a “thinking machine” that mimics the processes of the human brain finally became reality. Numenta released the first version of NuPIC (Numenta Platform for Intelligent Computing), a technology platform for HTM implementation, in March 2007.
After considering several structural options, Numenta decided to offer NuPIC as a platform under a free research license which granted rights for broad research purposes. The goal was to facilitate the emergence of a developer community and get “as many people experimenting on the technology as possible.”40 Once a developer reached a point where commercial value was created, Numenta provided a fee-based commercial license. Hawkins firmly believed the fastest pathway to commercial success was to offer developers the promise of financial returns down the road. To support developers in their efforts, Numenta’s website featured multiple tools such as a forum, wikis, and a blog written by members of the Numenta team. Users were encouraged to become active participants, share their work, report bugs, and contribute to forums. Numenta also developed a Partner Program, a fee-based support initiative in which participants received a high level of consulting and technical support. Prior to Grok’s commercial release, Numenta had attracted between 100 and 200 developers and eight partner companies working with its HTM-based software platform.
Numenta released Grok, a second-generation software platform based on newer HTM algorithms, for beta testing in 2011. In anticipation of this release, the company ceased active maintenance of NuPIC and its related vision applications. Existing NuPIC users had the choice to either adopt the newer version, or continue using the legacy software without the benefit of Numenta’s support. In May 2013, Numenta took on the name of its fledgling product (Grok), officially signifying its transition from a research and development company to a rapidly growing, commercial enterprise.
Because Grok is a novel and complex technology, one of the major challenges facing the young company is to explain exactly what Grok does and why other firms should adopt Grok as the preferred solution to big data problems. Citing an IDC prediction that 42% of all data will be machine-generated by 2020,41 Grok believes that its software is uniquely positioned to help companies manage this mass influx of information. Traditional approaches to data analytics require massive data storage capabilities and expert programmers to develop models and analyze the results. Grok bypasses these bottlenecks by providing automated analysis of streaming data, learning adaptively, and building predictive models based on its observations.42 It can also detect real-time anomalies in data streams (see Exhibit 3). Grok’s key capabilities are outlined in Exhibit 4.
Vitamin D partnered with Grok to develop and launch its “Toolkit,” a graphical user interface designed to visualize, analyze, and optimize HTM networks, as early as November 2009. Compared to traditional software, Vitamin D’s program provides state-of-the-art detection of people and moving objects in video streams at a fraction of the cost.43 Vitamin D Video may be used with generic webcams and cameras on both PCs and Macs, eliminating the need for high-tech equipment and making sophisticated video monitoring systems for home and business security available to a wide range of customers.44 One of the main advantages of HTM over traditional surveillance systems is that the software can be trained to recognize false alerts caused by insignificant events such as moving leaves or lighting changes. Jeff Hawkins stated, “Vitamin D Video is a compelling first step in making real the promise of intelligent computing.”45
One of Grok’s first commercial customers is EnerNOC Inc., which plans to utilize HTM technology to maximize the efficiency of the frequency reserve energy market. Paired with EnerNOC’s DemandSMART application, Grok will help to predict when energy reserves are needed and optimize ways to reduce demand in order to maintain smart grid integrity. Early results indicate that Grok has already increased DemandSMART’s accuracy by 29 percent.46 EnerNOC also plans to utilize Grok’s ability to detect anomalies to provide advance warning of mechanical failures.48
Other early adopters include an offshore European wind farm operation, which utilizes Grok to analyze real-time data from up to 34 sensors on 800 wind turbines. In this setting, Grok has been able to detect patterns in gear box temperatures that predict potential problems, alerting the company to the need to dispatch a repair person. At the same time, Grok is helping other clients to maximize revenues from mobile ad networks, predict stock prices, and recognize fraudulent financial transactions.49
Despite a promising start, Grok is still in its early development stage. If it works as Hawkins expects, though, Grok has the potential to lead to tremendous technical advances with far-ranging implications for many different industries. Grok is particularly well suited for data that stream in real time, where patterns change rapidly, where the flow is too fast for human analysis, and yet where quick action is needed (see Exhibit 5 for a list of potential industry applications).50 As a silicon-based system, Grok is exponentially faster than the human brain at processing data and running calculations; silicon operates on the order of nanoseconds compared to the milliseconds required by human neurons. Another important advantage is that Grok is not confined to the human senses. A multifaceted and diverse HTM sensory system, where powerful computing is combined with infrared, radar, or sonar technologies, means that Grok can perform tasks that are beyond ordinary human capabilities. “It is in the realm of [these] exotic senses that the revolutionary uses of intelligent machines lie,” Hawkins suspects.51
As an example, Hawkins points out that we constantly collect real-time data about the earth via space satellites and on-ground weather stations. Grok could be used not only to combine all these varied data from across the globe, but also to predict events such as hurricanes and floods more accurately. Hawkins believes that Grok could even be used to pinpoint the causes of global climate change more accurately.52 Recognizing Grok’s potential geological applications, Lockheed Martin Advanced Technology Laboratories has expressed an interest in applying HTM technology to satellite image analysis.53 At least one oil company has approached Grok wanting to use its software to find geologic patterns that could lead to the discovery of new oil resources.
Prediction markets, championed in James Surowiecki’s 2004 book The Wisdom of Crowds, represent another good candidate for Grok. Betfair and Intrade are successful online trading exchange websites focused on the making and commercialization of prediction markets. Here, Grok could be used to predict machine failures, tornados, and even stock prices.54 Similarly, HTM technology could be used by government agencies such as the FBI and CIA to analyze e-mail records and voice data in order to predict the likelihood, date, location, and type of future terrorist attacks.
Meanwhile, an (anonymous) automaker is interested in using Grok to build a smart car that could understand traffic and predict dangerous situations. This would be achieved by installing the HTM-trained system with outward-looking sensors (camera, infrared, radar, ultrasound) attached to the vehicle. If a ball rolls into the street or smoke comes from the car ahead, a car equipped with Grok may know to step on the brakes or even accelerate to get out of harm’s way.55 Grok continues to actively seek new partners who are able to collaborate and to commit dedicated resources, and who have access to training data, domain expertise, and go-to-market ability.56
Grok faces some particular challenges relating to the protection of its intellectual property against imitators and the rise of new competitors.
INTELLECTUAL PROPERTY PROTECTION
Grok holds two patents related to its HTM technology, and it has several other applications pending.57 The first, U.S. patent number 7,620,608, was issued on November 17, 2009, on “Hierarchical computing modules for performing spatial pattern and temporal sequence recognition.” The second, U.S. patent number 7,624,085, was issued on November 24, 2009, on “Hierarchical-based system for identifying objects using spatial and temporal patterns.”58
A patent is a grant made by the U.S. government that confers on the assignee the sole right to make, use, and sell that invention for a limited period of time (between 14 and 20 years).59 Patent law is based upon the Patent Act of 1952, codified in Title 35 of the United States Code. According to the statute, one who “invents or discovers any new and useful process, machine, manufacture, or any composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.”60 To receive a patent, an invention must be judged to consist of patentable subject matter, possess utility, and be novel and non-obvious. It is not unusual for the U.S. Patent and Trademark Office (PTO) to take two to three years to approve a patent application.
The effectiveness of patents as a way to protect intellectual property rights varies widely across industries. Whereas patents are considered critical to shielding innovations in the pharmaceutical industry,61 patent protection in the computer software industry is much weaker. Reverse-engineering of computer code is generally easier, not too time-consuming, and costs only 40 to 60 percent of the original investment. Nevertheless, if a company owns a technology that becomes an industry standard (e.g., Microsoft’s and Intel’s Wintel standard for the personal computer), it can become the source of a sustained competitive advantage.
Recently, there has been intense debate over whether software patents should be granted at all. Prior to the Supreme Court’s State Street Bank decision of 1998, abstract ideas, including mathematical algorithms, were not considered patentable. The State Street Bank decision reversed that position, stating that an invention was eligible for patent protection if “it produces a useful, concrete and tangible result.” New patent applications on computer software surged following the decision. For example, Microsoft held about 600 patents on the day the State Street Bank verdict was rendered,62 but now holds more than 9,000. Ironically, the State Street Bank decision has proven to be both an asset and a liability to software developers. While it is now easier to protect their intellectual property, many find it nearly impossible to write new software without infringing on numerous existing patents. The resulting multimillion-dollar lawsuits consume valuable resources that would arguably be better spent investing in further research and development.
The In re Bilski case illustrates efforts to overturn the State Street Bank decision.63 This case concerns a patent application by Bernie Bilski involving a method of hedging risks in commodities trading. The PTO rejected the patent as too abstract, and Bilski appealed the decision to the Federal Circuit, where the rejection was affirmed. The case was then appealed to the Supreme Court and on June 1, 2009, received certiorari (an order by a higher court directing a lower court to send up a given case for review).64 In their June 2010 ruling, the Supreme Court justices upheld the rejection of Bilski’s application, treating the “machine-or-transformation test” as an important clue to whether an invention is patentable (and effectively repudiating the broader State Street Bank definition). Under this test, a “process” qualifies for patent protection if it is either “tied to a particular machine or apparatus” or “transform[s] a particular article into a different state or thing.”65
All this legal wrangling has created considerable uncertainty for Grok, which is reliant on patents to protect its technology from copycats as it enters commercial markets. On January 11, 2011, the company released the following statement regarding its patent rights:
…Any commercial or production use of HTM technology that infringes on [Grok]’s patents will require a commercial license from [Grok]. For these purposes, “commercial or production use” includes training an HTM network with the intent of later deploying the trained network or application for commercial or production purposes, and using or permitting others to use the output from HTM technology for commercial or production purposes.66
Meanwhile, Grok is not the only company pursuing intelligent computing. Identifying trends in rivers of data is an important goal in a variety of sectors, ranging from scientific research to national security. A breakthrough in any industry could easily spread to other areas because of the similarity in basic pattern-finding processes. Therefore, any successes for Grok could bring tremendous opportunities for future growth, but at the same time attract capable and powerful competitors.
IBM’s “InfoSphere Streams” technology, previously called System S, is one direct rival. It is a major research initiative at IBM aimed at rapidly analyzing real-time data as it is being streamed from many sources. Applications include increasing the speed and accuracy of decision making in fields as diverse as homeland security and Wall Street trading.67 Stream computing emerged in response to the need for faster data handling and analysis in business and science. It also tries to tackle the issue of the growing flood of information in digital form, including websites, blogs, e-mail, video and news clips, telephone conversations, transaction data, and electronic sensors. Traditional computer analytical and data-mining processes collect data, store them in a database, and then search the database for patterns or run queries. In contrast, stream computing uses advanced software algorithms to analyze the data as they stream in real time. It uses text, voice, and image-recognition technology to determine whether some data are more relevant to a particular problem than others. “It’s a computing system that can morph and adapt to the problems it sees,”68 says Nagui Halim, technology director.
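The distinction between batch mining and stream computing can be illustrated with a minimal sketch (an illustrative example only, not Grok’s or IBM’s actual algorithms): rather than storing all readings in a database and querying them later, a streaming detector keeps only a small rolling window and judges each value the moment it arrives.

```python
from collections import deque
import math

def stream_anomalies(stream, window=20, threshold=3.0):
    """Yield values that deviate sharply from a rolling window of recent data.

    Unlike batch mining, nothing is stored beyond the window: each value
    is evaluated as it arrives, then the window slides forward.
    """
    recent = deque(maxlen=window)  # bounded memory, unlike a growing database
    for value in stream:
        if len(recent) >= 2:
            mean = sum(recent) / len(recent)
            var = sum((x - mean) ** 2 for x in recent) / len(recent)
            std = math.sqrt(var)
            # Flag readings far outside the recent norm so action can be
            # taken immediately, before the stream moves on.
            if std > 0 and abs(value - mean) / std > threshold:
                yield value
        recent.append(value)
```

Fed a steady sensor signal with one sudden spike, the generator flags the spike as it passes, with memory use independent of how long the stream runs.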
A slew of other companies are equally eager to enter the intelligent-computing industry. Google has acquired PeakStream, a startup in stream computing, in hopes of improving its video search functions.69 Fair Isaac Corporation (FIC) acquired HNC Software (HNCS), a neural-network company and a member of the Standard & Poor’s SmallCap 600. The merger is a way of combining expertise in analytics and credit scoring (FIC) with decision-management technology and fraud detection (HNCS) for customer acquisition and relationship management strategies.70
Grok is the result of Hawkins’ lifelong pursuit of brain theory. With Grok, Hawkins believes that the time has finally come to build intelligent computers that “think” using the same principles as human brains. In his words, this marks the true “beginning of the computer era.”71 With Grok’s promising start, however, this 52-year-old “reluctant entrepreneur”72 is facing yet another pivotal decision: What is the best way to keep this company going and ensure Grok’s long-term success?
While Palm and Handspring made Hawkins and Donna Dubinsky independently wealthy, they cannot continue to fund Grok indefinitely on their own. One of the primary challenges for a technology startup is to obtain funding and maintain a steady cash flow to support natural growth. New algorithms mean new applications and new business opportunities, but will also require new programmers and support staff. At some point, it will also become necessary to market Grok more aggressively and make clear to potential customers Grok’s advantages over AI and competing technologies like IBM’s InfoSphere Streams. Grok has nowhere near the financial resources of a giant like IBM, which can easily outspend and outpromote Grok, even though Grok has (what Hawkins believes to be) a superior technology.
Grabbing a Diet Mountain Dew from the cooler in Grok’s lobby, Hawkins wonders what Grok’s future might hold. New technologies can catch the eye of existing firms, making the fledgling company an attractive acquisition target. He has been down this path before with Palm, when he made the decision to sell to U.S. Robotics in order to generate the cash to bring the Palm Pilot to market. While the relationship with U.S. Robotics was fruitful, their joint success was part of the attraction for 3Com’s subsequent acquisition, which marked the beginning of the end of his relationship with the company. As hard as it had been for him to leave Palm, Hawkins couldn’t fathom abandoning his “brain theory” baby, let alone permitting someone else to decide how it should be “raised.”
Holding an initial public offering (IPO) is another option but comes with its own list of potential risks. To begin with, Hawkins is not sure that Grok’s technology is sufficiently developed (yet) to deliver the reliable quarterly financial returns that shareholders demand. Timing is also a crucial factor in determining the initial stock price, and with the sluggish financial recovery, investors are not as eager to fund uncertain technologies as they have been in times past. Does Hawkins have the resources to keep Grok going until the IPO market recovers? Assuming an IPO is successful, can Hawkins live within the constraints of a publicly traded company? Or will he find a fate similar to the one faced by Steve Jobs in 1985, who was pushed out of the company he personally founded and bankrolled for so many years?
A more selective approach would be to form strategic alliances or find private equity investment partners of Grok’s own choosing. On the one hand, this would permit Grok to share the costs and risks of commercialization with other companies. On the other hand, Grok would also have to share at least some control over its core technology, not to mention many of the rewards of its prior labors. An even bigger concern is that alliances can always be broken. The business world is full of companies that shared a novel technology with a trusted partner, only to watch the partner walk away and take full advantage of what they learned. Just like any relationship, alliances take time to form and maintain so that both parties continue to feel their needs are being met.
Although described as “reluctantly making a choice” every time, Hawkins has been incredibly successful in “making the right choice at the right time”73 throughout his business career. Will he be able to make the right decision yet again? Hawkins knows this situation holds a tremendous amount of uncertainty: the technology itself, commercialization, IP protection, competition, and so on. While Hawkins feels excited to see Grok making such great progress, he realizes he needs a solid plan for the future to ensure that his dream of making computing “more human” becomes a reality.