The newest (and at 6’6”, definitely the tallest) member of the Gazelle.ai team is Dr. Hugh Kelley, a machine learning and mathematics expert with a Ph.D. in International Economics and Mathematical Psychology.
As Chief Economist, Hugh’s main role is to develop and train the algorithms that are at the heart of the Gazelle.ai platform – identifying the early stages of corporate expansion to give economic developers a competitive edge in attracting valuable investment and job growth.
Hugh provided some great insights in a recent Q&A, as we uncovered more about his extensive background in the world of machine learning and AI.
How does a professor educated in Wisconsin and California end up working for a Montreal-based start-up?
Actually, since California I have worked at universities and research institutes in six countries and nine cities across Europe, Canada and the U.S. So a move here to Montreal is really just a hop over the backyard fence, and fortunately I am tall enough to step over!
Because of that international exposure, combined with that of my colleague Nadine Jeserich (Vice-President Analytics), our contacts – in particular Dr. Graham Toft of Growth Economics, Inc. – and our publications, we came to the attention of Steve Jast.
Fortunately, Steve identified skills I could add to the existing expertise at ROI Research on Investment, which would in turn enhance the Gazelle.ai platform. I was delighted to become part of the team in this beautiful city, which I have visited frequently since the 1970s.
Big data is a big buzzword these days, and harnessing it the right way is a key differentiator for tech companies. Without revealing the recipe for Gazelle.ai’s secret sauce, can you talk a little about your approach to selecting the right data for early company expansion?
It’s a long answer! But some historical context is important to really understand it.
In my research since the late 1990s I frequently used ‘big data’ before I was even familiar with the term. In the past we used GIS satellite and ground-truthed image data for 50-year multi-agent land-use simulations; the “big” part of this data becomes obvious when you consider that each pixel on an image represents a possible decision unit for a farmer agent, so depending on the scale of the image and the number of landscape characteristics you consider, the number of units adds up quickly.
We also integrated this data with other non-spatial information gleaned from local and national public sources. Another application analyzed financial news transcripts using text string searches (similar to the spider programs now used to crawl websites) to create usable data for forecasting particular financial prices following news events.
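For readers curious what that text-to-data step can look like, here is a minimal, purely illustrative sketch in Python; the keyword lists, the event categories, and the scoring rule are assumptions for the example, not the actual pipeline Hugh describes:

```python
# Illustrative sketch only: turning news transcripts into simple numeric
# features via keyword (text string) searches. Keywords, categories, and
# the scoring rule are hypothetical.
from collections import Counter
import re

EVENT_KEYWORDS = {
    "rate_hike": ["interest rate", "rate hike", "tightening"],
    "earnings_surprise": ["beat expectations", "missed expectations"],
    "merger": ["acquisition", "merger", "takeover"],
}

def transcript_features(text: str) -> dict:
    """Count keyword hits per event category in one transcript."""
    lowered = text.lower()
    counts = Counter()
    for event, phrases in EVENT_KEYWORDS.items():
        for phrase in phrases:
            counts[event] += len(re.findall(re.escape(phrase), lowered))
    return dict(counts)

# Example: one feature row per transcript, ready to join with price data.
sample = "The central bank signalled another rate hike after earnings beat expectations."
print(transcript_features(sample))
# -> {'rate_hike': 1, 'earnings_surprise': 1, 'merger': 0}
```

Each transcript then becomes a row of numbers that can be lined up against subsequent price movements, which is the basic idea behind that early forecasting work.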
Speaking of “big” data, I have worked with 4.2 million multi-variable, across-time records describing small-farm input-output activities covering 10 countries and 15 years in Europe. And here at ROI we work with firm-level data, which for the US alone could ultimately amount to 10+ million records in a relevant database.
Another aspect of big data, as you mention, is identifying data sources. The goal is to select the datasets and variables that provide maximal predictive accuracy along with the best cross-dataset comparability. In general, the ability to access, transform, and analyze big data in all its forms, whether web-scraped or drawn from news transcripts and feeds, while incorporating the millions of records and hundreds of variables relevant for firm-level analysis, presents significant software and hardware challenges.
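As a flavour of what ranking candidate variables by predictive value can look like, here is a small sketch using scikit-learn’s mutual information scorer on synthetic data; the data, the number of variables kept, and the scoring choice are all stand-ins, not the selection process actually used for Gazelle.ai:

```python
# Illustrative only: ranking candidate variables by how much signal they
# carry about a target, using synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# Synthetic stand-in for firm-level records: 1,000 firms, 20 candidate variables.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, random_state=0)

scores = mutual_info_classif(X, y, random_state=0)
ranked = sorted(enumerate(scores), key=lambda kv: kv[1], reverse=True)

# Keep the handful of variables that carry the most signal about the target.
top_variables = [idx for idx, _ in ranked[:5]]
print("Most informative variable indices:", top_variables)
```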
However, with the team and resources here at ROI and my previous years of experience, we are well positioned to keep making intensive use of this information, which places Gazelle.ai at the state of the art in terms of our database.
AI has moved from the sci-fi/alien invader world into real-world business applications very quickly. What are your thoughts on artificial intelligence and machine learning as they apply to the business world?
As a science fiction fan, my first experiences with the term AI were certainly with both friendly and not-so-friendly artificial helpers – R2-D2 and Skynet come to mind. Since then, however, my experience with the methodology has become more practical, and it roughly tracks the evolution of the field since the 1950s.
Some of my later work at the University of California applied existing and customized versions of multi-layer neural network (now called deep learning) and machine learning algorithms to understand human learning in decision-making experiments; to date I have fit these models to over 2,500 human subjects performing finance portfolio optimization, asset price forecasting, and spatially explicit land-use decision making, to name a few applications.
One application of these methods, in which I was one of the earliest researchers, is to take versions of the models calibrated to human behaviour and embed them in various economic models as alternatives to the traditional perfect-intelligence economic decision-making agents.
More recently at ROI, we are leveraging my 20+ years of experience with these algorithms across these diverse fields to predict key firm activities, identify the drivers of important firm actions, forecast future or missing data, and, crucially, uncover relationships between variables and firm performance that traditional economic theory would not suggest.
Our next step is to enhance the AI elements currently operating in our system: moving beyond offline model fitting and off- and on-line provision of forecasts, to the point where the programs themselves decide which variables or groups to attend to at a given moment, conditional on current or past market conditions, a client’s specific needs, and the dynamic multi-layer associations our business intelligence platform continually uncovers, vets, and rejects.
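A toy sketch of what “the program decides which variables to attend to” could mean in code is shown below; the feature groups, condition flags, and weighting rules are invented for illustration and are not the Gazelle.ai logic:

```python
# Hypothetical illustration: attention weights over feature groups that
# shift with simple context flags. Group names and rules are invented
# for this example, not taken from the Gazelle.ai platform.

FEATURE_GROUPS = ["hiring_signals", "web_traffic", "financial_filings", "news_mentions"]

def attention_weights(market_condition: str, client_focus: str) -> dict:
    """Return per-group weights conditional on context."""
    weights = {group: 1.0 for group in FEATURE_GROUPS}
    if market_condition == "downturn":
        weights["financial_filings"] *= 2.0   # lean on hard financial data
        weights["news_mentions"] *= 0.5
    if client_focus == "talent_attraction":
        weights["hiring_signals"] *= 1.5      # emphasize workforce expansion cues
    total = sum(weights.values())
    return {g: w / total for g, w in weights.items()}

print(attention_weights("downturn", "talent_attraction"))
```

The real system would of course learn such conditioning from data rather than from hand-written rules, but the sketch conveys the shift from fixed variable sets to context-dependent attention.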
There’s been a lot of press coverage about the AI community in Montreal in recent months. Have you picked up on this, and how do you feel the city is positioned versus the rest of the world in developing AI overall?
In fact, I have become aware of the growing critical mass here in Quebec and Montreal. As an academic, one may read many texts and articles without necessarily paying much attention to where the research comes from, but when I joined ROI I became more aware of what goes on directly outside my office door. I am excited to be a part of this growing community, and I think I bring some unique cross-disciplinary skills that will create a competitive advantage for ROI and the broader community.
Given the potential of the data within the Gazelle.ai platform, there’s been some discussion about rolling it out to a broader market, namely “deal professionals” in areas like real estate, legal, and financial services. Would it be a big leap to tweak the existing data in order to get the relevant information into their hands?
Importantly, one of the main themes in my research has been to investigate and highlight the cross-discipline generalizability of these methods, compared with many static or even dynamic economic models.
Extending this methodology to financial markets would be no problem, as many applications since the 1980s were designed for exactly that in the first place. Real estate would also be relatively easy, since it is often a price-and-spatial problem, similar to my previous work. Legal could be more difficult if the texts of the laws are not computer-searchable across the globe (Australia is one exception, and a potentially interesting market in this area).
Simply put, with the team now here at ROI, if a company has a unique problem in finding the right clients or investment opportunities that no one else has considered, we are the go-to firm given our strengths in data, science and AI.
We are confident that no matter how diverse the Gazelle data set becomes, this can only be a strength.