James F. Kenefick - Azafran Capital INSIGHTS Vol. 9


YOUR EYE ON INNOVATIVE MACHINE LEARNING SOLVING REAL WORLD PROBLEMS

Azafran Capital Partners

INSIGHTS

issue NINE

Neurotech Gathers Momentum

issue nine FOCUS

The brain-machine interface, once the province of science fiction, appears on the horizon

At Azafran Capital Partners, we are an early-stage venture fund investing in companies ($2M to $8M) that use deep learning and machine learning, emphasizing voice, acoustics and imagery datasets in the health, wellness, IoT / automation and enterprise spaces.

In recent years, the digital health sector has matured from simplistic apps that track a small number of vitals (wellness) into much more highly regulated medical devices (health). We are now at the threshold of technologies that blur the lines between computers and biology, with many of the leading startups in the space targeting the brain. The burgeoning field of neurotechnology involves brain-machine interfaces, neuroprosthetics, neurostimulation, neuromonitoring, and implantable devices intended not only to augment nervous system activity, but to expand its capabilities. Investors from Google to Amazon, Elon Musk and even Leonardo DiCaprio are lining up to place their bets on the prospect of joining the brain with machines. Even Facebook has made public its plans to develop brain-machine interfaces that let users type via their thoughts.

For decades now, science fiction has provided a bevy of images and examples in which machines think like, and eventually outthink, humans: from The Terminator to The Matrix, Battlestar Galactica and more recent examples like The Feed. Most of these stories don't end well for us. While those endings make for compelling entertainment, most efforts in neurotech today are highly focused on specific aspects of the interface. They may someday accumulate into something like the cyborg-like beings of fiction, but for now the work is about solving specific problems. Some companies in the space have reached commercialization from starting points as disparate as Formula 1 racing (MindMaze) and enterprise contact centers, where emotion-sensing algorithms and systems are applied to increase customer loyalty (EmoShape), either with a limited human-machine interface or other neurotech as the foundation.

Due to the amount of invested capital these companies require and the timeframe needed for exit, most of the companies currently in this space are not a solid fit with the Azafran Fund One investment thesis. For example, five of the neurotech companies our research team is tracking (see Use Cases on the next page for more details) have collectively received over $460M in funding thus far. Our team is keeping an active eye on companies creating products in the space that require less capital-intensive rounds and that will commercialize in three to five years, versus ten or twenty years down the road.

Further out on the spectrum, toward machines "thinking" on their own, one prescient lab example comes from the MIT-IBM Watson AI Lab's work with generative adversarial networks, or GANs. Researchers wanted to probe a GAN's learning mechanics by feeding it various photos of scenery (trees, grass, buildings, and sky), with the goal of seeing whether it would learn to organize the pixels into sensible groups without specific instruction. "Stunningly, over time...it [the GAN] had managed to group tree pixels with tree pixels and door pixels with door pixels, regardless of how these objects changed color from photo to photo in the training set. That it is possible suggests that deep learning can get us closer to how our brains work than we previously thought."

Source: MIT Tech Review, A neural network can learn to organize the world it sees into concepts—just like we do
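For readers who want a concrete picture of the adversarial training described above, here is a minimal sketch, in PyTorch on toy 2-D data rather than scenery photos, so the shapes, sizes and learning rates are illustrative assumptions and not the MIT-IBM lab's setup. It shows the loop in which a generator learns the structure of the data without ever being given labels.

```python
# Minimal GAN sketch on toy 2-D data (illustrative only; not the MIT-IBM GAN-dissection setup).
import math
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # generator: noise -> 2-D point
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator: point -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # "Real" data: points on a unit circle, standing in for structured training images.
    theta = torch.rand(n, 1) * 2 * math.pi
    return torch.cat([torch.cos(theta), torch.sin(theta)], dim=1)

for step in range(2000):
    # Discriminator step: learn to separate real points from generated ones.
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: produce points the discriminator labels as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The point of the MIT-IBM result is that, trained this way at much larger scale on images, the generator's internal units end up grouping pixels into object-like concepts even though nothing in the loop ever names a tree or a door.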

Issue Nine of INSIGHTS focuses on the intersection of neuroscience + neuro learning + AI, which has enormous implications for humanity but is still downstream from our day-to-day focus. However, enough examples and technology are emerging that our team definitely has an eye on companies that could fit our investment thesis in the near future. In the meantime, we remain excited and upbeat about the prospects for this incredible opportunity. If nothing else, we recognize that some of the most powerful businesspeople, engineers and investors in the world are placing big bets on when the machine-brain interface happens, and on all the implications, from everyday life to the evolution of humankind as a whole.

"Data sets are fundamental building blocks of AI systems, and this paradigm isn't likely to ever change. Without a corpus on which to draw, as human beings employ daily, models can't learn the relationships that inform their predictions. But why stop at a single corpus? An intriguing report by ABI Research anticipates that while the total installed base of AI devices will grow from 2.69 billion in 2019 to 4.47 billion in 2024, comparatively few will be interoperable in the short term. Rather than combine the gigabytes to petabytes of data flowing through them into a single AI model or framework, they'll work independently and heterogeneously to make sense of the data they're fed."

Source: VentureBeat, Multimodal learning is in right now - here's why that's a good thing
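The ABI point that devices will largely interpret their own data rather than feed one shared model can be pictured with the toy sketch below: two hypothetical device-level models, one for audio and one for imagery, each trained on its own corpus, with only their outputs combined downstream. Every class name, shape and value here is an assumption for illustration.

```python
# Sketch: heterogeneous devices running independent models, with only late fusion of outputs.
import torch
import torch.nn as nn

class AudioModel(nn.Module):          # hypothetical on-device acoustic model
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, 3))
    def forward(self, features):      # e.g. 40 acoustic features per frame
        return self.net(features)

class ImageModel(nn.Module):          # hypothetical on-device vision model
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 3))
    def forward(self, img):
        return self.net(img)

audio_model, image_model = AudioModel(), ImageModel()   # trained separately, on separate corpora

# Each device makes sense of only its own data stream; if the predictions are ever
# combined, the fusion happens on the outputs, not inside a single shared model.
audio_pred = audio_model(torch.randn(1, 40)).softmax(dim=-1)
image_pred = image_model(torch.randn(1, 32, 32)).softmax(dim=-1)
combined = (audio_pred + image_pred) / 2                # simple late fusion of independent outputs
```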


Neurotech Use Cases

As noted previously, the Azafran team does not see the Neurotech segment making up a significant portion of Azafran Capital Fund One's portfolio, but we are keeping a serious eye on some companies in the space. Below are examples/use cases of early- and mid-stage neurotech companies working across brain-machine interfaces, neuroprosthetics, neuromonitoring, and neurostimulatory devices.

● A non-invasive (wearable) neural interface platform that lets developers reimagine the relationship between humans and machines with new, intuitive control schemes;
● Emotion Chip technology that teaches intelligent objects how to interact with humans to yield a positive result;
● A computing platform that captures brain activity upon intent, creating a new operating system for computers - a brain O/S. The company has designed an intuitive mind/machine interface, which utilizes pre-real-time decoding of brain signals via neural prediction;
● One underlying technology that can help accelerate neurotech as a whole by providing an AI-driven optimizer, which makes deep neural networks faster, smaller and more energy-efficient from cloud to edge computing (a generic sketch of one such optimization appears after this list);
● A neural interface, ready for human trials, that could restore mobility to paralysed patients - allowing paraplegics and quadriplegics to move by directing an exoskeleton with their thoughts, without the need for complex and invasive brain surgery;
● A company developing implantable brain-machine interfaces (BMIs) via a "sewing machine-like" device capable of implanting very thin (4 to 6 μm wide) threads into the brain; it has demonstrated a system that reads information from a lab rat via 1,500 electrodes and anticipates starting experiments with humans in 2020.
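On the AI-driven optimizer item above: we do not know that company's proprietary method, but as a point of reference, the sketch below shows one generic, widely used way to make a trained network smaller and faster for edge deployment, post-training dynamic quantization in PyTorch. The model and layer sizes are placeholders.

```python
# Sketch: shrinking a trained network for edge deployment via post-training dynamic quantization.
# Generic PyTorch technique, not the proprietary optimizer referenced in the list above.
import torch
import torch.nn as nn

model = nn.Sequential(                 # placeholder network standing in for a trained model
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# Convert Linear layers to int8 weights; activations are quantized dynamically at run time.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 256)
print(model(x).shape, quantized(x).shape)   # same interface, smaller weights, typically faster on CPU
```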

market PREDICTIONS

Global Neurotechnology Market 2018-2022: Largest Segment is Currently Neuromodulation, Followed by Neuroprosthetics and Neurosensing

At the forefront of neuroscience and neurotechnology innovation are targeted applications and products treating specific diseases and other afflictions that have not been solved by conventional medicine and research. The worldwide market for neurotechnology products will reach $13.3 billion by 2022, according to a report (The Market for Neurotechnology) issued earlier this year by ResearchandMarkets.com. The neurotechnology market is divided into four segments: neuroprosthetics, neuromodulation, neurorehabilitation, and neurosensing. The main areas of focus for shorter-term applications and treatments coming to market include Parkinson's disease, chronic pain, and urinary incontinence, as well as emerging markets such as bioelectronic medicine, obesity, migraine, sleep disorders, and psychiatric disorders.

Source: Research and Markets Press Release

NEWSWORTHY… Neuroethics Meets Artificial Intelligence

"The history of Artificial Intelligence (AI) is inextricably intertwined with the history of neuroscience. In spite of this intimate link between AI and neuroscience, ethical reflections on these two disciplines have developed quite independently of each other and with little interaction between the two research communities. On the one hand, the AI ethics community has focused primarily on issues such as robot rights, algorithmic transparency, biases in AI systems, and autonomous weapons. On the other hand, the neuroethics community has primarily focused on issues such as pharmacological enhancement, brain interventions, neuroimaging, and free will.

Given this relationship between neuroscience and AI, the neuroethics and the AI-ethics communities must no longer operate in silos but pursue greater mutual and cross-disciplinary exchange. Creating a common ethical discourse at the brain-AI interface will likely yield benefits for both fields. There are already some positive examples. For instance, the International Neuroethics Society (INS) has often featured panel discussions on AI and other emerging technologies. In November 2018, researchers gathered in Mexico to discuss the "Neuroethical Considerations of Artificial Intelligence". In May 2019, researchers at LMU Munich organised a conference titled "(Clinical) Neurotechnology meets Artificial Intelligence" focusing on the ethical, legal and social implications of the two fields."

Link here to full blog post.

Source: The NeuroEthics Blog, Marcello Ienca



from the ROAD Highlights from Conferences, Pitch Days, Openings and other Events around the U.S. and World

Standard procedure and part of the Azafran team process: we are out on the front lines every week, attending and speaking at conferences, events, pitch days and openings. We are all seasoned entrepreneurs, operators and startup geeks, so this part comes easily to us. It is also crucial to delivering on the Azafran investment thesis, one more way we stay on top of the latest companies, partners and tech hitting the street and, most importantly, find the best companies that fit our thesis.

BCI Summit Quantum, New York, NY

James and Jock were out in force at the BCI Summit, where James helped lead several panels and both met with many great companies and potential partners. BCI is a deep-tech investment summit exploring the opportunities in quantum technologies and advancements. Topics covered included analysis of the current status of quantum computing technology, along with its challenges and risks, as well as current and expected applications in biotech, chemistry, materials science, cybersecurity and other fields. As an organization, BCI conducts extensive research and analysis to identify the most critical industry topics and panel opportunities. Experts from around the world collaborated at this amazing event and helped shed light on the issues from all angles via panels as well as audience Q+As.

Upcoming Events and More From the Road

In the upcoming month, the Azafran team will be traveling extensively in Europe and on the West Coast of the U.S., meeting with potential LPs, partners and companies. In January, a number of us will be on hand at CES in Las Vegas to kick off a busy and exciting 2020. As always, please let us know if you would like to schedule a meetup; we are constantly criss-crossing the globe and it won't be long before we're in your corner. Please call or email Zubeyda

Azafran Capital Managing Partner, James F. Kenefick, leading a panel at the BCI Summit in NYC

NEXT Round Investment Conference, Ljubljana, Slovenia

Daniel and Kristina were on hand for the 2019 Next Round Investment Conference, the premier startup event in Slovenia, which attracts an aggregate investor potential of over 1 billion euros. Daniel was "quite impressed with the push for deep technology companies in Europe. There is definitely momentum on the Artificial Intelligence front and it is growing. To add, it was impressive to see many companies looking globally, or outside their domestic country, for their target market. Lastly, since Azafran Capital Partners is geographically agnostic, the number of Voice Tech and AI companies coming from Europe makes it extremely interesting."



Investment Segment Highlight: Machine Learning
Component: Data analysis and automation of models

IN THE KNOW

Description/Definition: From Wikipedia: Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to effectively perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Pete Warden writes in his blog (Why the Future of Machine Learning is Tiny), "I'm convinced that machine learning can run on tiny, low-power chips, and that this combination will solve a massive number of problems we have no solutions for right now...machine learning on tiny, cheap battery powered chips is coming and will open the door for some amazing new applications!" ML is now so pervasive that, at a high level, there are examples from all corners: Yelp using ML for image curation at scale, Pinterest for improved content discovery, and, moving to deeper tech, Google using ML to build neural networks and machines that dream, and Baidu's DeepVoice, a deep neural network that can generate entirely synthetic human voices that are very difficult to distinguish from genuine human speech.
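As a concrete illustration of the "tiny, cheap, battery-powered" direction Warden describes, the sketch below trains a very small Keras model and converts it into a TensorFlow Lite flatbuffer of the kind that can be deployed to low-power devices. The synthetic data and model size are placeholder assumptions, not a specific workload from any company mentioned in this issue.

```python
# Sketch: a tiny model exported for low-power devices (the TinyML direction Warden describes).
# Synthetic data and model size are illustrative placeholders.
import numpy as np
import tensorflow as tf

x = np.random.randn(1000, 16).astype("float32")          # pretend sensor features
y = (x.sum(axis=1) > 0).astype("int32")                  # toy binary label

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]      # enables post-training quantization
tflite_bytes = converter.convert()
print(f"TFLite model size: {len(tflite_bytes)} bytes")    # small enough for a microcontroller target
with open("tiny_model.tflite", "wb") as f:
    f.write(tflite_bytes)
```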

The Azafran Take: McKinsey & Company, in Notes from the Frontier: Modeling the Impact of AI on the World Economy, has predicted that by 2030, 70 percent of businesses will use AI. More predictions: ML, globally recognized as a key driver of digital transformation, will be responsible for cumulative investments of $58 billion by the end of 2021. In addition:

● The global ML industry, growing at a CAGR of 42 percent, will be worth almost $9 billion in the latter part of 2022 (a quick back-of-the-envelope check of this figure follows this list);
● The neural networks market will be worth over $23 billion in 2024;
● The Deep Learning (DL) applications market in the US alone has been predicted to shoot from $100 million in 2018 to $935 million in 2025.

Source: Machine Learning to 2019: Tracing the AI Growth Path
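On the first figure above: compound annual growth is just value_start × (1 + r)^years, so the cited 42 percent CAGR and ~$9B 2022 figure imply a base-year value we can back out. The five-year 2017-2022 window below is our assumption; the report may use a different base year.

```python
# Back-of-the-envelope check of the cited ML-market figures (CAGR arithmetic only).
# The $9B and 42% numbers come from the report cited above; the 2017 base and the
# five-year window are assumptions made for illustration.
cagr = 0.42
value_2022 = 9.0e9                                   # ~$9B in 2022, per the cited forecast
years = 5                                            # assumed 2017 -> 2022 window

implied_2017_base = value_2022 / (1 + cagr) ** years
print(f"Implied 2017 base: ${implied_2017_base / 1e9:.2f}B")      # roughly $1.6B

# Projecting forward from that implied base reproduces the headline figure.
projected_2022 = implied_2017_base * (1 + cagr) ** years
print(f"Projected 2022 value: ${projected_2022 / 1e9:.2f}B")
```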

How AI and neuroscience drive each other forwards

Chethan Pandarinath wants to enable people with paralysed limbs to reach out and grasp with a robotic arm as naturally as they would their own. To help him meet this goal, he has collected recordings of brain activity in people with paralysis. His hope, which is shared by many other researchers, is that he will be able to identify the patterns of electrical activity in neurons that correspond to a person's attempts to move their arm in a particular way, so that the instruction can then be fed to a prosthesis. Essentially, he wants to read their minds. "It turns out, that's a really challenging problem," says Pandarinath, a biomedical engineer at Emory University and the Georgia Institute of Technology, both in Atlanta. "These signals from the brain — they're really complicated."

In search of help, he turned to artificial intelligence (AI). He fed his brain-activity recordings to an artificial neural network, a computer architecture that is inspired by the brain, and tasked it with learning how to reproduce the data. The recordings came from a small subset of neurons in the brain — around 200 of the 10 million to 100 million neurons that are required for arm movement in humans. To make sense of such a small sample, the computer had to find the underlying structure of the data. This can be described by patterns that the researchers call latent factors, which control the overall behaviour of the recorded activity.

The effort revealed the brain's temporal dynamics — the way that its pattern of neural activity changes from one moment to the next — thereby providing a more fine-grained set of instructions for arm movement than did previous methods. "Now, we can very precisely say, on an almost millisecond-by-millisecond basis, right now the animal is trying to move at this precise angle," Pandarinath explains. "That's exactly what we need to know to control a robotic arm."

Source: Nature, How AI and neuroscience drive each other forwards, 2019 - (link)
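Pandarinath's actual models (such as LFADS) are sequential autoencoders far richer than anything shown here, but the core idea, compressing a couple of hundred noisy channels into a handful of latent factors, can be sketched with a toy autoencoder on simulated recordings. Every shape, value and variable name below is an illustrative assumption.

```python
# Toy sketch of latent-factor extraction from neural recordings (not LFADS; illustrative only).
import torch
import torch.nn as nn

n_channels, n_factors = 200, 8          # ~200 recorded neurons compressed to a few latent factors

# Simulated recordings: low-dimensional latent dynamics observed through noisy channels.
true_latents = torch.randn(5000, n_factors)
mixing = torch.randn(n_factors, n_channels)
recordings = true_latents @ mixing + 0.1 * torch.randn(5000, n_channels)

encoder = nn.Sequential(nn.Linear(n_channels, 64), nn.ReLU(), nn.Linear(64, n_factors))
decoder = nn.Sequential(nn.Linear(n_factors, 64), nn.ReLU(), nn.Linear(64, n_channels))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(500):
    idx = torch.randint(0, recordings.shape[0], (128,))
    batch = recordings[idx]
    factors = encoder(batch)            # the "latent factors" summarizing population activity
    recon = decoder(factors)            # reconstruct all channels from those factors
    loss = loss_fn(recon, batch)
    opt.zero_grad(); loss.backward(); opt.step()

# After training, encoder(recordings) yields a compact, denoised trajectory that a separate
# decoder for a robotic arm could be trained against (the part this sketch omits).
```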

Azafran Capital Partners Leading Panel at BCI Summit: New York, NY




What’s In a Brain, Anyway? A short, useful description from the Data Driven Investor: “Our brain is a roughly 3-pound mass with about 100 billion brain cells strewn throughout. About 1/3 of those are electrically excitable neurons which have a resting charge of around -70 millivolts. The other 2/3 are what are referred to as glial cells or basically the housekeepers. These cells are not nearly as electrically excitable as neurons. Their job is to clean up debris, maintain proper pH and water levels, provide neurons with structural support and a long list of many other functions. This incredible complexity of the brain provides a blueprint for the development of AI systems that are in turn used to quantify and help understand the inner workings of nature’s most sophisticated computer.


When it comes down to it, computer scientists and neuroscientists are essentially asking the same questions and attempting to understand similar systems. They are both analyzing individual components, the calculations they do and how these components and calculations fit into the system as a whole. A neuron is to the brain as a transistor is to a computer chip. Therefore, various AI systems inevitably provide a quantitative window into the functioning of our own neurological systems.” Source: Data Driven Investor: Neuroscience and Artificial Intelligence; How They are Perpetuating Each Other’s Progress (link)
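The -70 millivolt resting potential quoted above is exactly the quantity that the simplest computational neuron models track. Below is a minimal leaky integrate-and-fire simulation, with standard textbook parameters rather than values taken from the article, showing a membrane potential charging from rest toward a threshold and emitting spikes.

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential leaks back toward a ~-70 mV
# resting level, charges under an input current, and emits a spike at threshold.
# Parameters are standard textbook values, not drawn from the article quoted above.
dt = 0.1          # ms per simulation step
tau = 10.0        # membrane time constant (ms)
v_rest = -70.0    # resting potential (mV), the figure cited in the article
v_thresh = -55.0  # spike threshold (mV)
v_reset = -75.0   # post-spike reset (mV)
r_m = 10.0        # membrane resistance (MOhm)
i_ext = 2.0       # constant input current (nA)

v = v_rest
spike_times = []
for step in range(int(200 / dt)):                    # simulate 200 ms
    dv = (-(v - v_rest) + r_m * i_ext) / tau * dt    # leak toward rest + drive from input
    v += dv
    if v >= v_thresh:                                # threshold crossing -> spike, then reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 200 ms, first at {spike_times[0]:.1f} ms")
```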

Feedback, going forward

Thank you for the work you are doing in the world and for your continued support of Azafran INSIGHTS' monthly journey into the intersection of deep learning and machine learning, emphasizing voice, acoustics and imagery datasets in the health, wellness, IoT and enterprise spaces. Our intention is to use this publication as a vehicle to open a dialogue with each of you, together as a group, and we strongly encourage and welcome your feedback. We've made giving feedback and comments simple: please reach out via any of the means below and/or visit our website at AzafranCapitalPartners.com. We will be publishing INSIGHTS at least monthly going forward, and we look forward to growing this sector together, with all the benefits to our respective organizations, as well as to humanity as a whole, that are here today and coming down the road.

voice-tech INDUSTRY At a Glance: Top 5 Markets & Global



