Dec. 16, 2022

Casper Wilstrup - Self-management, transparency and a new kind of AI to revolutionize science


So it's 2012, and Casper is at the end of a sabbatical he started after leaving Unispeed, the hardware startup he founded during his studies at the Niels Bohr Institute in Copenhagen.

Just as he was starting to head back into the job market, deep learning achieved a major breakthrough in object classification, propelling the technology to the heights it is now.

This is also when AI sparked Casper's interest, and he remembered his old idea.

If you enjoyed this episode, why not recommend it to a friend?

Homepage: https://www.deeptechstories.io

Podcast Twitter: https://twitter.com/deeptechpod

My Twitter: https://twitter.com/sturmerph

LinkedIn: https://www.linkedin.com/company/deeptechstories

Music by Nathaniel Drew x Tom Fox:

https://www.nathanieldrew.com/

https://tfbeats.com/

Design by Amadeus Schwed:

https://elementsofpuremotion.com/

Transcript

Casper:

And what it does do is bring out extreme amounts of creative energy.

Without this approach, we couldn’t have built the QLattice®. I can say that for absolutely sure. If you try to put QLattice development into a rigorous top-down hierarchical scheme – if you do that and have some boss figuring out what everybody should do now – that boss would have to be a genius beyond human capabilities to achieve what we have achieved.

Philipp:

Welcome back to Deep Tech Stories, a podcast making creators, entrepreneurs, and idealists in the deep tech space accessible by highlighting their stories and pulling their ideas from the lab into the real world.

I’m Philipp Stürmer, and on the show today: Casper Wilstrup and the road to coming up with a new kind of AI.

So it’s 2012, and Casper is at the end of a sabbatical he started after leaving Unispeed, the hardware startup he founded during his studies at the Niels Bohr Institute in Copenhagen. Just as he was starting to head back into the job market, deep learning achieved a major breakthrough in object classification, propelling the technology to the heights it is now. This is also when AI sparked Casper’s interest, and he remembered his old idea.

Casper:

What happened was I did due diligence: I analyzed a company called Blackwood Seven on behalf of a large Danish venture fund, and I really liked it. They were using AI technology, and I think that’s when AI really jumped into my mind as a thing. Before that, I was thinking about my ideas and data analysis as just that: data analysis. Now I gave it a new name. People had figured that out before me, but for me it suddenly became: Oh! So what I’m really thinking about when I’m thinking about these graphs and stuff, that’s AI (at least in a certain sense).

Philipp:

So you still kept thinking about your initial idea from…

Casper:

Not so much the initial idea, but a lot of the additional thinking I’d done later on. Data analysis, large-scale data processing, pattern finding, the stuff that has played a big role in a lot of the other things I’ve done, was suddenly given a conceptual name. And that was AI.

And then I also studied deep learning on neural networks, as they were called in the mid-90s as well; they were a little hot then, and there was a small flourishing of neural networks there. I didn’t know what a neural network was, not really.

So then in 2013, ’14, I came across this entire space of AI and these pretty impressive founder people. Like I said before, what I really liked about them was that they had real business experts in the place they wanted to operate, which is something called media mix marketing. That’s not what most people in tech startups do in marketing; it’s marketing on the highest level, where you study how you actually allocate your resources in terms of, say, TV advertisement or newspaper advertisement or billboard advertisement, those kinds of things. So that’s called media mix modeling or media mix planning. And their idea was to use AI and machine learning in that space. I just liked the idea, and I liked them, and I thought they were onto something, and I felt I had something to contribute. So I actually ended up joining Blackwood Seven.

So I left Best Brains and jumped on board Blackwood Seven as the CTO. And that was a very interesting experience. Again, I had a lot of the ideas we talked about before about how to organize and how to bring the best out of people and how to succeed. And we did do well. When I joined, we were five people. And when I left, we were 250. So, I guess that was fairly good.

But I also, throughout the process, had thoughts that we were perhaps not doing things the way I would like them to be done. So in 2017, I left the company and took another sabbatical. I took about a year and I was just thinking about what to do now. I worked a little bit for Best Brains again. That’s when it happened. That’s when I said, all right, AI is cool. And I’m pretty good at coding things that run fast. And I can do algorithms. I have most of what it takes. I also have some business expertise now.

I couldn’t do it by myself, though, because there were a lot of things to be done to build this. Inventing a new AI approach from scratch can seem like a rather daunting task; neural networks weren’t invented in a single day by a small team either, right? So I realized there that I couldn’t do this by myself. So I wrote a prototype and demoed it to a couple of my friends, people from high-performance computing.

The prototype… I think there were six months where I was just sitting in front of my computer, coding these prototypes and visualizing them in this 3D space: these different connections, these path-finding algorithms.

Philipp:

Did you write it in Python or Fortran?

Casper:

No, I tend to write in C and I still do. My two most used languages are Python and C.

But as most people say, it’s much harder to code in C. For me, it’s almost my native language; I can always think in C. I guess it comes from my assembly background from back in the ZX Spectrum days.

So machine code, C, whatever. I’m a little peculiar in that, but it also means that I can get the most out of my algorithms. And then I showed it to my network of strong AI high-performance computing engineers to put together a founding team that wanted to take the risk of building a completely new unproven brainchild-of-Casper AI algorithm to take on the big guys who’ve been working on neural networks for a couple of decades. And some people said yes, and some people said no.

Philipp:

How many did you ask?

Casper:

Ten, something like that. And we ended up being seven people who actually founded Abzu.

Philipp:

All technical?

Casper:

No, six very technical people. And then depending on how you count me, because I really think of myself as both a technical and a business-oriented person.

But anyway, I was in the role, at least in the beginning, of being very technical. And then we knew that we had to have at least somebody who wasn’t a tech nerd. And then we joined forces with my brother, Jonas, who is the seventh founder and the only non-technical founder.

So seven people in all set out to build this technology. We didn’t have a name for it. First, we called it, I think in the prototype I called it Machine Cognition Labs. So that was the name of the company before it was founded.

Philipp:

It rolls nicely off the tongue.

Casper:

Yeah, exactly. Let me say it was voted down when we had the founding session for Abzu in November ’17, where the people who ended up founding the company met (except for one person) at Nectar Connector, which is right in the middle of nowhere in the Spanish Pyrenees. We spent a long weekend, or a little short week, just doing the business plan: What are we going to do? Are we going to found this company? What should we call it? How should we organize? Who owns what? All that stuff that it takes to agree on.

Philipp:

I guess that was reasonably complicated with seven people.

Casper:

It wasn’t complicated. It took some time, but nobody really wanted to get the better of anybody else. So in that sense, we got off to a good start by being reasonable, friendly, rational, and analytical people, all of us.

But there was still a lot of detail to it. Like: what if a founder actually wants to leave? That’s pretty important. It’s one of the most important things to have agreed upon upfront; otherwise, it can destroy a company. And particularly if you have a lot of founders, then it’s very likely to have-

Philipp:

Just for allocation reasons, or…?

Casper:

Well, one bad outcome can be that, say, a couple of the founders choose to leave, and among them they own, say, 20% or 30% of the company. Then you have 30% of your cap table being taken out as passive ownership. So you already have investors, so to speak: people who no longer contribute, but who have also not paid any money into the company. They just hold a large part of the cap table.

And due to the way shares in a company work, and due to the way the VC industry thinks about things, you have to hold back on giving out your shares, so that you have more shares to give out as you raise more and more money over the life cycle of the company. So if you give away shares too early, then you can’t raise nearly as much money as if you don’t.

And it doesn’t matter that you can say to your investor, “That’s okay, we don’t care about this ownership thing.” The investors care about the founders’ ownership situation, because they believe, and are often right in assuming, that this plays a strong motivational role in the long run. If you have lost a large part of your cap table early in your life cycle, then you’re in trouble.
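
To make the dilution arithmetic concrete, here is a toy sketch (hypothetical percentages, not Abzu’s actual cap table) of how departed founders’ “dead equity” squeezes the remaining founders across a few funding rounds:

```python
# Toy cap-table arithmetic with made-up numbers. If departed founders keep a
# big slice, every later round dilutes everyone pro rata, and "dead equity"
# still takes up room that investors expect to see with active founders.

cap = {"active founders": 0.70, "departed founders": 0.30}

for i, round_pct in enumerate((0.20, 0.20, 0.15), start=1):
    # Selling round_pct of the company dilutes every existing holder pro rata.
    cap = {holder: stake * (1 - round_pct) for holder, stake in cap.items()}
    cap[f"investors, round {i}"] = round_pct

for holder, stake in cap.items():
    print(f"{holder:>20}: {stake:6.1%}")
# Active founders end near 38%, while about 16% sits with people who no
# longer contribute: the "passive ownership" problem described above.
```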

So it was pretty important to figure out: if somebody absolutely wants to leave, how can we ensure that they don’t leave with a large number of shares, at least not any more than what is fair compared to their contribution. That was perhaps the most thorny thing we had to decide on at Nectar Connector in our founding session. But, actually, I think 80% of the time was spent coding. We are who we are! So coding in the day and drinking beers or sitting around the fireplace in the evening.

Philipp:

And your brother was sitting there thinking: “What the hell are these guys talking about?”

Casper:

Yeah! And that was the birth of Abzu. It just so happened that I had called the technology, the prototype, “LibAbzu.” When you write a library in C, you often prefix it with “lib”. So it essentially meant that it was called Abzu, right? I had given the prototype the name Abzu.

Philipp:

Why Abzu?

Capser:

So Abzu is actually a Sumerian word. And this now points back to the book…

So, the book. The book was actually a manifestation of my love for a certain ancient culture, the Sumerian culture, which thrived in southern Mesopotamia, what’s now Iraq, from 4000 BC to about 1500 BC. These were the people with the cuneiform writing on clay tablets, very informative. And they have played a very big role in the formation of what we think of today as civilization.

People tend not to know about them, and think that perhaps the Egyptians invented writing and all the other things the Sumerians actually invented. But the Egyptians did not invent writing; the Sumerians did. And that’s perhaps one of the stories I wanted to get out with the book that I wrote. But anyway, I have a strong passion for this specific culture and I’ve studied it. I taught myself to read Sumerian cuneiform as a nerdy little hobby. So I have a passion for that.

And “Abzu” is a Sumerian word that originally meant “subterranean water”, essentially with no religious connotations, just the water under the ground. But it also came to have a religious connotation as “the source of everything.”

So in writing this core library, in which we would run these simulations of finding paths, really a graph search algorithm, I called it “the source of everything” in Sumerian.

Philipp:

A bold statement to make when starting a company.

Casper:

Yeah. Well, when naming a library, it’s perhaps not as bold. You just have to come up with something. And I think also the temple of the main god of Eridu, who is a main character in my book, is called Abzu. The house of Abzu. And so I guess the word just rolled off my tongue.

So I called the library or the prototype “Abzu”. And then it was actually Victor Galindo who said, “Let’s call the company Abzu.” I was against it and I can’t remember why, I just was. But “Machine Cognition Labs” obviously didn’t work that well either. So at the end of the day, we called it “Abzu.”

Unfortunately, what happened around the same time is that a small gaming studio called Giant Squid launched a computer game called Abzû. And it took us a couple of years to overtake them, at least on the front page of search results when people search for “Abzu”. And now we are there.

Philipp:

50-50. And on YouTube, I think you were on top. And then if you scroll further down, because there are only so many videos about you guys, obviously at some point that stuff pops up.

Casper:

Exactly. So we’re starting to be able to be found by our name. But that’s just a fun little interesting challenge that we have.

Philipp:

But there wasn’t a conversation via lawyers yet in between, or did you call them at some point?

Casper:

Not at all, because we actually trademarked the name Abzu and the guys with the game did not. I think they have now. But when you register a trademark, first in Europe, as we did, and then later on in the US, you have to trademark it in a class. That’s what’s called a Nice class, after the Nice Agreement that established the classification.

So we protected the word Abzu in the classes that make sense for us, which is any kind of computer consultancy and software, classes 42 and 9 if you’re interested. And they did not. They trademarked the word Abzu in entertainment. So there’s no way we could battle them; we wouldn’t win that case. We don’t have enough prominence to claim the name outside of the trademark classes we can register ourselves. If that weren’t the case, the world would very quickly run out of names for companies.

Philipp:

You came out of this coding weekend in the Spanish Pyrenees with a name and with a definitely-working prototype. Where everyone was just looking at it, like, what the hell is actually going on in there? Like it’s pure magic, essentially.

Casper:

Yeah. Well, I should probably admit that I wouldn’t really say that the prototype was working. It proved the concept. It proved that by using a graph search algorithm like this one, you could find the most parsimonious, simple relationship: the simplest graph that could explain something. I think I had fed the prototype a couple of data points about names and ages, and it quickly came up with: “Oh, what we have here is actually that each of these numbers must be the age of the people in this list.” So it was able to relate the numbers to the birth dates and the names of the people in the list by doing this efficient graph search over all possible experiments, or graphs, that could predict, that could say: “What are these numbers?” So that was what the prototype did.
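
To make the idea concrete, here is a toy sketch of that kind of parsimonious search. It is emphatically not Abzu’s algorithm, just an illustration of trying candidate “graphs” (here, tiny expressions over the columns) simplest-first and keeping the first one that explains the data:

```python
# Toy parsimonious search: try candidate expressions in order of complexity
# and keep the first (simplest) one that explains the numbers.
# This is NOT Abzu's algorithm; it is purely a sketch of the idea.

# Hypothetical data: each row pairs known columns with an unexplained number.
rows = [
    {"birth_year": 1984, "current_year": 2018, "number": 34},
    {"birth_year": 1990, "current_year": 2018, "number": 28},
    {"birth_year": 1975, "current_year": 2018, "number": 43},
]

# Candidate "graphs": small expressions over the available columns,
# each tagged with a complexity score (number of operations).
candidates = [
    (1, "number = current_year - birth_year",
     lambda r: r["current_year"] - r["birth_year"]),
    (1, "number = birth_year - current_year",
     lambda r: r["birth_year"] - r["current_year"]),
    (2, "number = (current_year + birth_year) / 100",
     lambda r: (r["current_year"] + r["birth_year"]) / 100),
]

def explains(expr, rows, tol=0.5):
    """Does this expression reproduce every observed number?"""
    return all(abs(expr(r) - r["number"]) <= tol for r in rows)

# Search simplest-first: the first hit is the most parsimonious explanation.
for complexity, description, expr in sorted(candidates, key=lambda c: c[0]):
    if explains(expr, rows):
        print("Found explanation:", description)  # the "age" relationship
        break
```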

Philipp:

And with that prototype, you went to VCs for money?

Casper:

Yes, that’s actually the prototype that went to VCs, and of course, also a business bank.

Philipp:

How would you explain it to them? Because not every VC has extensive knowledge of Feynman path integrals and graph search at the same time.

Casper:

For sure not. I guess I had this unfair advantage that the people in the VC industry knew me. I guess I could base the initial founding of Abzu on trust in me and the people I put together as a founding team. Otherwise, I don’t think it would have been possible because it is very rare — and typically not a particularly good idea — to start a company with a tech idea and not a product idea. That’s about as risky as it gets.

And particularly here, we didn’t really know if we could make this work. And I would say of our early investors: it was fair for them to really question, “How likely is it that this half-studied physicist and his friends can actually build an AI algorithm that can outperform the big ones, or at least do something that the big ones cannot?”, and to come to the conclusion that it’s not particularly likely. That would be a fair analysis. It would have been my analysis in many situations, if I hadn’t known myself a priori. But some of these people did know me a priori.

Philipp:

It sounds like a high-risk, high-reward bet.

Casper:

Exactly. And I guess that’s why they did it.

If you want to do these kinds of investments… Nowadays people talk about deep tech investments, and this term has really floated to the surface; everybody wants to do deep tech investments. Back then, and this is four years ago, that wasn’t even the case. Another word that was often used was “moonshot”, referring to Kennedy’s statement, “I want to put a man on the moon.” How likely was that? Well, they did it, but we wouldn’t have believed it, looking at the computers they had on that spaceship.

Anyway, we got that initial investment to go from prototype to working AI algorithm, and that took us about two years. By 2020, we had the final, or fully functional, algorithm running on a supercomputing cluster in a data center in Germany, where we could really put in data and find rather difficult to find, actually incredibly difficult to find, explanations in that data, and prove that it worked by putting in data where we knew beforehand what the explanation was.

So, say, if we put in data about the orbit of a planet and out come Kepler’s laws of planetary motion, that’s a good thing, right? Because that’s hard to find in the data set. It requires you to look at a lot of options, because elliptic equations are not that straightforward. But it does work, and it does find it.
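
As a hedged illustration of that kind of sanity check (not Abzu’s code), here is a sketch that generates orbital data from a known law and verifies that a simple fit recovers Kepler’s third-law exponent:

```python
# Sketch of the validation idea: generate data from a KNOWN law, then check
# that a fit recovers it. Kepler's third law says T^2 is proportional to a^3,
# so a log-log fit should recover an exponent of 1.5. This is a stand-in for
# the much harder open-ended search the QLattice performs.
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(0.4, 30.0, size=200)               # semi-major axes [AU]
T = a**1.5 * (1 + rng.normal(0, 0.01, size=200))   # periods [yr], 1% noise

# Fit log T = k * log a + c; Kepler predicts k = 1.5.
k, c = np.polyfit(np.log(a), np.log(T), 1)
print(f"recovered exponent: {k:.3f} (Kepler's third law predicts 1.5)")
```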

Philipp:

Interestingly enough, there was a paper a month ago, or months ago, which did exactly that with graph neural networks and then some symbolic expression for, I think, Newton’s laws and Kepler’s, respectively.

Casper:

Yeah, yeah. So there’s another thing called “Feynman”. So we call our library, the one you use to interact with our technology, we call that Feyn® after Richard Feynman. But — fortunately, unfortunately, whatever — another group of people have done another symbolic approach, which actually does use neural networks under the hood, that is called AI Feynman. So here’s another name clash.

But that may be the paper you’ve come across, because the way they prove it is by taking all the Feynman equations, generating synthetic data corresponding to every equation from Feynman’s Lectures on Physics, and then showing that the AI Feynman symbolic regression engine can find them from those synthetic data sets. But so can our technology. That’s easy; it’s not actually hard to find those.
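
For the curious, running that kind of synthetic-equation test against the QLattice looks roughly like this with Abzu’s Python library, Feyn. This is a sketch assuming the feyn 2.x interface (QLattice, auto_run, sympify); signatures may differ between versions:

```python
# Minimal sketch of a synthetic-equation test using Abzu's feyn library.
# Assumes the feyn 2.x API (feyn.QLattice, auto_run, Model.sympify);
# consult the current documentation, as the interface may have changed.
import numpy as np
import pandas as pd
import feyn

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "x1": rng.uniform(1, 5, size=500),
    "x2": rng.uniform(1, 5, size=500),
})
df["y"] = df["x1"] * df["x2"]  # the "hidden" law we hope to recover

ql = feyn.QLattice(random_seed=42)
models = ql.auto_run(data=df, output_name="y")

# The best model is a white box: inspect it as a symbolic expression.
print(models[0].sympify())
```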

What’s much more difficult to find is the kind of thing that we work on nowadays, where you’re looking for, say, genetic fingerprints for certain diseases, or understanding why certain cancers grow in a certain way. But maybe I’m getting a little bit ahead of myself here, because what I really want to say is: referring back to 2020, when the algorithm was done, we had this pretty powerful supercomputer that we needed to run the algorithm on at the time, so it could find these rather complicated explanations in data.

This is the moment where we knew that we had struck gold, because now we had an algorithm that finds explanations, that works, and that competes in terms of performance with the black-box models, neural networks, and so on. And on top of that, when we know what the real explanation is, because we generated the data ourselves, it also finds that explanation.

So that was a golden moment, I would say. And that was the moment where the initial investors could, I guess, breathe a sigh of relief, because now it was no longer a tech risk. Now it became a product risk. What are we going to do with this awesome new technology?

So if you have invented an engine, nobody buys engines — people buy cars, right? So the next thing that has to happen when you have a technology like that is to figure out: How are you going to take this to market?

So here starts the journey that is the first step for normal startup companies. But, of course, having a unique advantage in this technology gives us a market benefit. So that’s where we found ourselves in April 2020. We also found ourselves in a world where COVID had just happened. Just yesterday, I think, one of my investors asked me: Do you think that hindered your growth? Of course, that’s a what-if game.

Philipp:

Where you don’t know the other option.

Casper:

Exactly. So I can’t really say that, but what I can say is that I know what it meant, because there are at least two concrete things that happened because of COVID.

One was this: when we started to think about what to do with the technology, we created a list of things to do. We wanted to do something in finance analysis. We wanted to do something in insurance. We wanted to do something in predictive maintenance. We wanted to do something in time series forecasting in the general sense. And we wanted to do something in pharma. And I think the pharma thing probably wouldn’t have been on the list if it hadn’t been for COVID. None of us in the company are life science people. We are mathematicians, physicists, computer scientists, those kinds of people.

Philipp:

And by now, a lot of bioinformaticians are as well.

Casper:

By now, a lot of bioinformaticians. But back then: not.

So we went pretty systematically about it. Okay, we have the technology. There are actually a ton of things that this technology can solve problems for; the problem of explainable AI is very big in a lot of verticals. But it also takes a while to get traction or a foothold in an industry, so you had better choose wisely what to do first. It doesn’t mean it’s the last thing you do, but the first thing you do needs to be wisely chosen.

So we actually spent the first year just piloting what happens if we use this in these different verticals. What would a business model look like? How are we going to make money? Can we sell this to people who want to listen to us when we come and talk about this awesome new technology? And the place where we found the most traction was the life science space.

And reflecting on why, well… How do you demo it to potential life science customers? You obviously can’t go there with the hard mathematical details, which might be too much and just aren’t necessary. Remember: customers don’t care about that. Customers care about outcomes. So if you’re a scientist, like my close collaborator Michael Christiansen from the State Serum Institute, what you care about is understanding the diseases you study. He studies a lot of diseases in pregnant women, one of them being preeclampsia, for example.

Preeclampsia is a disease that hits pregnant women in the third trimester, and it’s actually rather severe: it causes very high blood pressure, and it unfortunately often kills the fetus. It’s not a good thing. So that’s what he studies. He had a lot of data about how these women look from a, say, measurement perspective: blood samples and urine samples and so on. Putting that data in, out came some very simple, elegant models that said: “Well, there’s a certain relationship between these two hormones that looks this certain way for the women who develop preeclampsia.”

And looking at that model, not so much at what the algorithm predicted, what impressed a person like him was: “Oh, I never knew that this hormone plays this role in this disease. But now that I know, I can actually understand why.”

So suddenly his… It is retroactive understanding, based on the model it spits out.

Philipp:

Exactly. But doesn’t that introduce some confirmation bias?

Casper:

Yes, yes. And that’s why this is a hypothesis. It’s a step in the scientific method.

So what we can do with this is tell you how likely it is to be, say, a spurious thing, something that doesn’t actually correspond to a real pattern in the data. Once you have a closed-form mathematical formula, you can calculate the probability that it fits the data to the accuracy it does by accident. So it’s equivalent to what scientists would call a p-value, but it isn’t quite a p-value.

We actually use different criteria to calculate the probability of this being luck; lucky here in the sense that the model fits the data by accident and doesn’t reflect anything real. And that means it’s one of the knobs you can turn when you use our technology, the QLattice: your risk willingness in getting models that might not be true.
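
Abzu’s actual criterion is not public, but the flavor of “probability that this fit is just luck” can be sketched with a simple permutation test. This is a stand-in illustration, not their method:

```python
# Permutation-test sketch of the "is this fit just luck?" idea.
# A stand-in illustration, not Abzu's (non-public) criterion.
import numpy as np

def r2(y, yhat):
    """Coefficient of determination of predictions yhat against targets y."""
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

def luck_probability(model, X, y, n_perm=1000, seed=0):
    """Fraction of label-shuffled datasets on which the fixed closed-form
    model scores at least as well as it does on the real data."""
    rng = np.random.default_rng(seed)
    observed = r2(y, model(X))
    hits = sum(r2(rng.permutation(y), model(X)) >= observed
               for _ in range(n_perm))
    return hits / n_perm

# Hypothetical example: a fitted closed-form model y = 2*x0 + x1.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, size=200)
model = lambda X: 2 * X[:, 0] + X[:, 1]

print("probability of fitting this well by luck:",
      luck_probability(model, X, y))  # ~0.0 for a real relationship
```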

But anyway, when you get the model out, the entire point is that the model is a white box. You can see what it says. And that means that you can relate that to the information you knew beforehand.

So let’s take the preeclampsia case. Out comes a hormone called leptin. Leptin is produced by the adipose tissue in the body, fat tissue. So if you have a high leptin level, you’re also very likely to be obese. And it is well known that obese women have a much higher risk of developing preeclampsia; actually, obese people have a much higher risk of developing any type of hypertensive disorder. So the fact that the model picks up this hormone doesn’t seem controversial at all. But if it had picked up, say, the zip code of the woman or some other silly thing, then the researcher would immediately say: that’s unlikely, so I’ll just discard this hypothesis and move on to the next one in the list that the QLattice produces. And that’s perfectly fine. That is the scientific method: you get a hypothesis out, and then you choose whether to believe that hypothesis. The next step, of course, is to prove that it is true. And it doesn’t matter whether it came from a machine learning algorithm, or you popped it out of your head, or it came out of the QLattice. It’s not true until you’ve proven it to be true.

So what you do with the hypothesis is you make new predictions, and then you go out and try to falsify those predictions in real life. But that’s impossible if the model is a black box. What are you going to falsify? You don’t know what it is that your model is actually saying. Whereas Michael, looking at this model, can say: the higher leptin level and the lower soluble leptin receptor level, combined with a very unfortunate sub-range of a hormone called resistin, which indicates that you have an autoimmune reaction, this is what the QLattice asserts. So this specific thing is something I believe in. And to test it, I can go out and look at women who have preeclampsia and confirm that they tend to have this specific level of resistin associated with this autoimmune reaction, and thereby gradually build up evidence for the hypothesis. At this stage, it doesn’t matter where the hypothesis came from.

So this is returning, in a long-winded way, to your question. The researcher using our technology does not care why the QLattice works. They care about whether the hypothesis that comes out of the QLattice can be put through a rigorous scientific process, because it’s the hypothesis that matters.

I spoke with another guy at one point who said it’s kind of like putting on a thinking hat. The QLattice doesn’t actually answer the question; it helps you come up with potential answers to the question. You, in this case a scientist, still have the onus of taking that hypothesis and proving it. And in some situations, you can just risk it.

You can say: “I’m going to believe this, because I’m not going to use it in a research publication.” In other situations, you have to accept that it’s just a hypothesis, and you have to set up a new prospective study of some kind. A lot of our customers today use it in drug development, where what you’re really studying is the properties of molecules.

A crash course on drug development, on the pharma life cycle: First, you have some kind of thing in the body, a gene or a protein or a virus or something, that you want to interfere with. That’s called a target. Then you can have a lot of ideas about how to design a molecule that would actually interfere with that target, depending on what the target is. If it’s a gene, it can be a gene therapy approach. If it’s a protein, you can try small molecules or various other approaches. But anyway, you’re trying to find a molecule that will do something to that target.

When you get to that step, it’s really hard to figure out what actually defines whether a drug will work or not. So one thing is, “Will it bind to the target?” That’s fair. But, “How efficiently will it do that? How quickly will the body get rid of it? Will it have side effects? Will it be toxic? Will it work for everybody? Are there subpopulations for which it’s dangerous?” And so on and so on.

All these questions are very hard to answer for a new molecule. But given the QLattice, you can actually get a hypothesis about what causes, say, toxicity in a drug. So suddenly, you can choose the design of your molecules: design ones that you, based on these hypotheses, believe will not be toxic. And then this is not really such a scientific endeavor; you’re just increasing the likelihood. You have to prove it anyway during your clinical trials. That’s how drug development works: eventually, you have to test this first on animals and then on humans. So if you trust the model, it doesn’t matter so much whether the rest of the world trusts the model.

So that’s definitely by far our biggest use case. People study data about molecules and the way they interact with the body to understand the interaction so that they can better design molecules that do or do not do the things they want these drugs to do. So what we have here is actually people with a very strong scientific mindset who cannot use black-box machine learning; it won’t do anything for them. They don’t want to predict whether a molecule is toxic. They want to understand why it’s toxic, in order to not design toxic molecules.

So understanding is important because they want to intervene. They want to change something, and you can only do that if you understand. So you have people with a scientific mindset who care about explanations, and who also have money to spend, because this is a high-value industry. So what we found here is an industry that is willing to actually risk doing business with a small Danish-Spanish startup company just because…

Philipp:

Why are you in Barcelona as well?

Casper:

That’s because some of the founders are from Spain.

Philipp:

Okay. So the people…

Casper:

Actually, the majority of the founders are in Spain. Of the original seven people, there were three Spaniards, an Italian, and three Danes. So in the beginning, we were just as much a Spanish company as a Danish company. That was one of the other things that changed during COVID: the lockdown in Spain was so much more severe that our growth in Spain was really hindered. Everything that happened during at least the early part of the COVID pandemic happened here in Denmark. So that really hindered the growth of the Barcelona office.

Originally, the intention was always to have these two equally sized legs. The core reason is that I set out to find the best people I’ve known, and gotten to know, in AI and high-performance computing, and they just didn’t all happen to be Danes. So we agreed on these two locations. In the beginning, it didn’t matter so much, because it was just a small room at a co-working space that we rented. But nowadays we have a pretty big room at a co-working space in Barcelona, and then we have this office here where we’re sitting, with the majority of the Abzoids, as we call ourselves. I think we’re 25 people here and six people in the Barcelona office.

Okay, yes. So anyway, that’s life science. That’s why we went into life science. And it was interesting, because none of us knew anything about life science. I barely knew what an RNA molecule was. Of course, I knew what it was, but I had no real connection with it. We had mathematicians and high-performance computing software engineers and so on, but nobody knew the domain. So we just went in blind. But we quickly learned, did some interesting projects together with some pretty big pharma companies, and then started hiring people who could really fill the gap and be the domain experts in the space.

So nowadays we have some of the strongest thinkers and practitioners in several fields within pharma and life science, particularly in drug development, and particularly in the field called RNA therapeutics. mRNA vaccines, like the COVID vaccine that we all know and love, are one of those kinds of drugs. If you look at us from the outside today, we are perhaps one of the strongest powerhouses, at least in Denmark, perhaps in the world, in understanding the properties of RNA molecules in a drug capacity. That’s both because of the people we’ve hired and because we have a technology that allows us to learn and learn and learn how these molecules actually react and interact with the body.

Philipp:

I learned earlier that you matched your lunch times to Spain.

Casper:

Yeah, it’s taken some getting used to for some of our Scandinavian colleagues that we eat lunch at 1:00PM, and we all eat lunch together. And we have a screen over here so we can turn it on and look at each other. We use that a lot in general. And that’s part of our culture.

Philipp:

So everything screams transparency, essentially.

Casper:

Yes. I guess it’s a theme in my way of thinking, transparency. Why do I care so much about building transparent and clear and explainable AI models? Well, perhaps because I just care about transparency and Abzu is a company that is absolutely focused on transparency. We don’t have secrets internally.

Philipp:

You have a little blog article on your website that you’re a…

Casper:

“Teal” is a term that some people apply to this self-managing organizational culture that is getting more and more popular, where you really trust people to make all sorts of decisions on their own and to figure out how to organize and work groups and teams and so on, on their own. So there’s no authority in Abzu.

There’s a certain grouping of people who think a lot and write a lot about this way of organizing. And they’ve given it the name Teal, after the book Reinventing Organizations by the Belgian writer Frederic Laloux. He is certainly one of the people who has done a lot of thinking and a lot of good writing on self-managing organizations.

Philipp:

So essentially everyone is… Well, it’s a bit like this dichotomy in school when you’re 18 and, depending on the school and where you grow up, you technically still have to ask if you’re allowed to go to the toilet.

But at the same time, you’re supposed to make this massive decision about what to do next, which will essentially have a major impact on your life. And a normal working environment seems similar: your working hours, everything, is somewhat given. Whereas here, people are not technically allowed to do just as they want, because there’s still some common goal that everyone has to work toward.

Casper:

Yes, but it is in the words, right? They are allowed to do what they want because we trust them to not just do anything. So self-managing doesn’t mean no management. It means that we rely upon and expect that people will manage themselves. What doesn’t work in a self-managing environment is if people think that it’s about no management and doing whatever they feel like today.

It’s not like nobody here is a manager. It’s more like everybody’s a manager. And that’s an important distinction, and I think it’s something that people miss when they think this kind of approach will degenerate into chaos, which it clearly doesn’t. Some of the most successful companies out there, Netflix for example, are actually very deep into the self-managing culture, without embracing it to the level that we do. So it certainly isn’t a fact that you can’t become big just because you’re self-managing. But it does require that you realize it’s about training and coaching and educating people in management. So they actually have to do that. They have to set goals. They have to ensure that these goals are aligned with the corporate goals. They have to work together to find corporate goals. They have to think and reason about what happens if these goals are not met. So essentially, everybody here has to take full responsibility for the future of the organization.

And they can only do that if they know as much as anybody about the current state of the organization. That’s how transparency and self-managing culture fit together. If you make decisions in the dark, you’re bound to make bad decisions. So a lot of people think that people can’t make decisions for themselves in a corporate setting. Actually, I know from experience that people do make bad decisions. But maybe the root cause is not that people are stupid, but that they are not given enough information about the current state of affairs to make the right decisions.

Philipp:

I would also assume that it attracts the kind of people that can do that to begin with.

Casper:

Yes. If you are a person who wants to be told what to do, which is by no means bad, it has advantages and disadvantages, then you probably wouldn’t apply for a job here, and you probably wouldn’t like it if you did. Because with responsibility comes a certain level of worry and a certain level of dedication, which I by no means want to make into a merit in its own right.

There’s certainly a perfectly valid way of thinking about your life, where your work life is just something you do because something else matters to you. I think to almost everybody, something matters. It can be that it’s your hobby. It can be your family. It can be your riding, your hiking, your bicycling, whatever. But almost everybody I know, at least all happy people I know, have something that they really care about. But that doesn’t have to be their work. And if it’s not at all their work, but the work is really a means to gather resources to do that other thing, then a self-managing environment like Abzu is probably not going to be particularly nice for you. You probably do want to just be told what to do or do a fairly simple thing. So you can get the money and the resources to do that other thing that you really care about, which is absolutely fine. But it’s not going to make you a happy Abzoid. So that’s the reality of it.

I don’t think that there is a future where all companies are self-managed. I think there are always going to be some people who want to go into a more hierarchical structure and just play their role in that context and then take their creative energies elsewhere. I don’t know about the future of Abzu. I don’t know what we will look like organizationally, culturally, five, ten years from now when we’re a lot of people. But for now, at least, we are a bunch of people who take this very seriously and who really care about Abzu. And therefore, it is the passion of a lot of people. And that has a lot of interesting effects.

We are the best of friends. We have a very strong network internally because we care about the same things; essentially, our hobby is the same thing: Abzu. All of us have other hobbies as well. I have this crazy Sumerian culture thing; other people play the guitar. But I think almost everybody here would say that Abzu is probably one of the biggest passions in their lives. Everyone has something, everybody has their hobbies, and we’ve chosen this one. You probably have others as well; I imagine your studies also matter to you. And that’s it.

So that’s how Abzu works as a company. And what it does do is bring out extreme amounts of creative energy. Without this approach, we couldn’t have built the QLattice. I can say that for absolutely sure. If you try to put QLattice development into a rigorous top-down hierarchical scheme – if you do that and have some boss figuring out what everybody should do now – that boss would have to be a genius beyond human capabilities to achieve what we have achieved. So that’s not the way to do it.

Philipp:

The next question is not necessarily meant to, for lack of a better term, question your integrity. But everything is about transparency.

Yet I couldn’t find an explanation of how it actually works. Because you said earlier, before we started recording, that it’s basically you against the giants, and you want to keep it from them as long as possible, so you have some sort of market advantage for the next one, two, three, four, five years.

Casper:

Yes. So the way I see that is there are two classes of secrets.

Some secrets are ours as a group, Abzu secrets. If you think about it as a family, then it’s a family secret. And then there are secrets that other people own, but which they share with you and therefore have a right to decide what you can do with. That’s like our customers’ secrets. They sometimes tell us things that we are not allowed to share freely in certain ways; they apply restrictions to those secrets. So secrets are a thing, and they’re valid and necessary, and they belong to somebody. And the people who own those secrets choose how to share them.

The thing that we have chosen in Abzu is that all the secrets that belong to us are freely shared inside of Abzu. But it doesn’t mean that they’re necessarily freely shared with the rest of the world. For instance, for now at least, we have a spreadsheet where everybody in Abzu can see what salary everybody else has. We have not shared that spreadsheet with the rest of the world. We’re actually talking about it, and I don’t necessarily see why we wouldn’t, but people could have sensitivities about that, and a lot of people own that secret. So it’s not an option to keep it secret internally, but it is an option to keep it secret externally.

So if you think about it like that: secrets belong to somebody, and they choose who to share them with. So I’m not a transparency fanatic in the sense of, say, Richard Stallman, with “information wants to be free” and secrets shouldn’t be allowed in private life.

Philipp:

But you could apply for a patent.

Casper:

We are applying for a patent. And if and when we get that patent, we may consider publishing. We probably will.

Philipp:

My personal biggest struggle with it is that I really want to understand it and I couldn’t find the paper.

Casper:

And then you have to apply for a job. That’s the way forward here. If you really want to see the source code, then you have to apply. For now, at least, the source code of the QLattice is a secret that is fully and freely shared inside of Abzu, because we own it, and we have good reasons to keep it inside.

And I must say, my open-source heart… I was part of the Linux community, I’ve been part of research communities in a lot of contexts, I live and breathe open-source software. I really believe in that philosophy; I’ve made contributions to a ton of different open-source projects across all sorts of things, and I still do. So I don’t really like the fact that we have to keep it secret. It also hinders us in terms of proving that it actually works, and we meet some kind of skepticism now and then where it would be a little bit easier if we could just say, “Look.” But it would also destroy us.

Philipp:

Yeah, because for me personally, if I were to use it, I wouldn’t know how, because I just do completely different stuff and wouldn’t see how it would help. I personally would choose not to use it, simply because I haven’t gone through the math and I don’t know how it works.

Casper:

Yeah. And that is certainly an attitude that we meet and that I understand and that is holding us back. So it is the collateral damage of the way things, at least, unfortunately, for now, must be.

And it’s not the normal situation. Normally, you should not be afraid of your competition. Actually, I’m with Eric Ries, the guy who wrote The Lean Startup, on this. If you really are so scared of sharing your best ideas, of people stealing your ideas, then try the following: take your second-best idea and dedicate a year of your life to seeing if you can force somebody to steal that idea. And you will find that’s incredibly hard. People don’t tend to steal ideas. They tend to get their own ideas and run with those. So you don’t have to be that worried. But there are situations where it certainly is the case that the idea will be taken and applied. So “stealing” is a strong word.

Let me give you an example. Most people who work in the startup ecosystem are familiar with a company called DeepMind. DeepMind is similar to Abzu in that they also came up with a different approach to a certain kind of machine learning problem, in the reinforcement learning space: they figured out a way to use deep learning inside of a reinforcement learning algorithm. So that’s what DeepMind invented.

But remember, at the time when DeepMind did this, when was that? ’12, ’13, or something like that? Deep learning wasn’t really that big. And reinforcement learning has a big decision space, which ties into the story here. What they actually started playing with was figuring out how to teach these reinforcement learning algorithms to play Atari games, these old console Atari games. So that’s how they developed the algorithm.

To do that, you have to have a neural network that can actually process a picture as input. And here we’re back to Yann LeCun and the idea of making deep learning work on pictures, because the input space, the state space of the system you’re trying to learn about, is rather big. In this case: a picture. But in all other reinforcement learning settings, it tends to be rather big too. This is why reinforcement learning with Q-tables works only if your state space is very small, for some simple games and so on. Whereas if you have a very big state space, like the space of all screenshots of a computer game or all possible states of a chess game, then it’s not possible to fill out the table. So you have to have something that can stand in for the table. And this is what you’ve read about.
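
To make the Q-table point concrete, here is a toy tabular Q-learning loop on a hypothetical ten-state chain environment. One table entry per (state, action) pair works fine here, and becomes hopeless at screenshot scale, which is exactly where a network has to stand in for the table:

```python
# Toy tabular Q-learning on a hypothetical 10-state chain: walk right to the
# goal for a reward. One table entry per (state, action) works here, but the
# table is impossible to fill for huge state spaces like Atari screenshots,
# which is why DeepMind swapped it for a deep network (the DQN idea).
import random
from collections import defaultdict

n_states, n_actions = 10, 2          # tiny toy chain; actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.99, 0.1
Q = defaultdict(float)               # the Q-table: (state, action) -> value

def step(state, action):
    """Hypothetical environment: walk the chain, reward 1 at the last state."""
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == n_states - 1 else 0.0)

def greedy(state):
    """Best-known action, breaking ties randomly so early exploration works."""
    values = [Q[(state, a)] for a in range(n_actions)]
    best = max(values)
    return random.choice([a for a, v in enumerate(values) if v == best])

for _ in range(2000):                              # training episodes
    s = 0
    for _ in range(200):                           # cap episode length
        a = random.randrange(n_actions) if random.random() < epsilon else greedy(s)
        s2, r = step(s, a)
        target = r + gamma * max(Q[(s2, a2)] for a2 in range(n_actions))
        Q[(s, a)] += alpha * (target - Q[(s, a)])  # Bellman update
        s = s2
        if s == n_states - 1:                      # reached the goal
            break

print("Q-table size:", len(Q), "entries; trivial here, impossible for Atari frames")
```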

But fundamentally, having a neural network that can process that kind of state space is not that old. Neural networks that could take that big an input were not really working 10, or at least 12, years ago.

Philipp:

We didn’t have the computational capability.

Casper:

Yeah. And some of the core, crucial innovations, like deep learning itself, are pretty much a thrown-together list of neat tricks that make it work. There are a lot of issues, and over the years people have come up with clever little tricks and tweaks that actually make these things work in practice. So rather than a single genius idea that made deep learning finally work, it’s a long list of little add-ons that have improved aspects of deep learning, which also makes it a very complicated technology to work with, I must say.

But anyway, this was not possible 10 years ago, and DeepMind made it possible inside certain problem spaces. So they did a lot of research on the deep neural networks themselves, but also on how you fit that into a reinforcement learning loop. And that was their invention. That really was Hassabis’s core idea. And they made that work. And they proved to the world that it can work by playing Atari games. And then they published the algorithm.

And within a couple of months, Google, Facebook, Microsoft, Tencent, and Amazon were all fiddling with deep neural networks in the reinforcement learning loop, for self-driving vehicles and for all sorts of other things with a high profit value. And here sit the inventors, having built the technology, having proven it works on Atari games (hard to commercialize that), wondering what to do now.

So their situation was, and I don’t know them, I don’t know what they felt like, but I would imagine sitting in a board meeting saying, “Oh shit, this is not fair. We built something incredible. It’s going to radically change technology. And Google, Microsoft, and Tencent are going to run off with all the benefits of this.” And that would have happened. That did happen. So they didn’t have any choice. They had to sell the company to Google, which they did. A moderately good exit, $600 million, at least the rumor has it. It’s not a bad exit, but it’s also not fair, in a way. It’s actually a sad outcome for people who really invented something as groundbreaking as that. At least that’s how I would feel if it were me.

And now it is me. So how do I avoid that? And the way I avoid it is by biting the bullet of secrecy around the algorithm, at least until the patent is in place and we have established a strong foothold in a market that’s more profitable than playing Atari games. And for us, that’s life science and the pharma industry. So yes, the price of that is that there is a certain category of researchers who do care about how the QLattice came up with the hypothesis, and not just about the hypothesis itself, and who will not use the technology. You are perhaps one of them. And that’s fair. That is just the way it will have to be, at least for a while.

Philipp:

And if that changes, I will be one of the first to download the paper and go through it.

Casper:

Yeah. I would encourage you, though, to try it out. I mean, the hypotheses that come out are fun and interesting in their own right, closed-source in origin or not.

Philipp:

Reading about it, or just going through the website, gave me a few ideas. Some of them were basically exactly: okay, I can do a graph search with all of that and then…

Casper:

Yeah. Yeah. So the wider graph search idea, that’s a different topic. But for now, we’ve focused on searching a certain graph space, which is the space of all mathematical equations or formulas. And that’s just because there are a lot of things to be done in that space.

But we could also search logic networks. We could search algorithmic problems, like, say, the traveling salesman problem. We could search semantic problems, where it’s really semantic graphs you’re searching, and so on and so on. We just haven’t gotten around to it yet. But we will, as we grow and raise more money and do what’s in the future for this company.

Philipp:

Speaking of the future, besides the patent that’s hopefully coming soon-

Casper:

Yeah, it’s pending. Let’s see.

Philipp:

It’s pending. What is going to happen in the next year, two years, besides the bioinformatician life science space?

Casper:

Yeah. So the technology has horizontal applications. We’ve essentially solved at least part of the black-box problem of AI, which ties into, say, the trust in AI.

Do you want to use a model where… Let’s take an example. A lot of people assume that self-driving cars use AI. And that’s partially true, but it’s also not true. Because the AI in a self-driving car is for image processing and the other kinds of input data: for detecting objects and the state of the world from sensory data. That’s what AI is actually used for there today. Once that is done, it’s passed on to a closed-form, human-handcrafted equation that somebody wrote, which makes the decisions about when to turn, when to brake, when to do what. Because those equations need to be white boxes: for regulatory approval, and to understand what the car does when it brakes, you have to have an equation that you can study. This is what it does.

This is an example of a place where the equation really matters, or rather the explanation that the equation carries really matters. Those kinds of areas are completely closed off for the current generation of machine learning, because it doesn’t deliver that. But we do, because we deliver the kind of equation that a regulator can then look at and say: I approve. It brakes at the right times. Whether it’s optimized to be the safest possible equation, I couldn’t tell you as a regulator, but what I can tell you is that I like the way it brakes. That is what you can get out of this approach.
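
As a purely hypothetical illustration (not taken from any real vehicle), a regulator-readable braking rule could be as simple as the closed-form check below; every term has a physical meaning that can be audited:

```python
# Hypothetical white-box braking rule, purely illustrative. Every term has a
# physical meaning a regulator can audit, unlike a black-box network mapping
# pixels straight to pedal pressure.

def should_brake(distance_m: float, speed_mps: float,
                 reaction_time_s: float = 1.0,
                 max_decel_mps2: float = 6.0) -> bool:
    # Classic stopping distance: reaction distance + braking distance.
    stopping_m = speed_mps * reaction_time_s + speed_mps**2 / (2 * max_decel_mps2)
    return distance_m <= stopping_m

# At 20 m/s (72 km/h) we need about 53 m to stop, so an obstacle 40 m away
# triggers braking:
print(should_brake(distance_m=40.0, speed_mps=20.0))  # True
```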

We haven’t chosen the next vertical. We’re thinking about finance. We’re thinking about insurance. We’re thinking about the automotive industry that I alluded to here. We’re thinking about manufacturing and predictive maintenance, that space where you do not necessarily want to predict when a machine breaks, but understand why it breaks. Those are pretty much the verticals we’d go after, and we haven’t made up our minds yet. It’s not something we’re going to do this year, but at some point, certainly once life science is a running, going business, we’ll start to think about what to do next in the next vertical.

Philipp:

Thank you so much for listening to this episode. If you want to hear more, subscribe to Deep Tech Stories wherever you listen to your podcasts or follow me on Twitter. There will be no episode over the holidays, so you’ll be hearing back from me on the 13th of January with a brand-new episode.