Modern AI, in particular so-called "deep learning", is everywhere. In the last few months alone, a new generation of image generation based on text prompts has emerged that is arguably on par with human artists. However, in many crucial applications, the black-box nature of such models is a problem. If you want to let an AI drive a car, no regulator or decision maker will allow it. The same is true for any application that deals with scientific understanding. You want to understand why the AI comes to its prediction and which parts of your dataset are essential.
But there is a different approach to AI that could solve this black-box problem: so-called "symbolic regression". Instead of obtaining a model made up of several million or even billions of parameters, you get an equation that a person can understand just by looking at it.
The problem is that symbolic regression is insanely hard to do. In fact, it is so tricky that no one has managed to use it for anything but the most basic toy models. No one but a small Danish startup called Abzu, which just received a 2.5 million euro grant from the European Union to apply its symbolic regression algorithm to drug discovery.
Abzu's CEO Casper Wilstrup, an avid coder ever since his childhood, initially came up with the idea behind Abzu and the QLattice during his physics studies in the mid-90s. Since then, his career has taken many twists and turns, all tracing back to his physics studies and to getting his first computer when he was nine years old.
If you enjoyed this episode, why not recommend it to a friend?
Podcast Twitter: https://twitter.com/deeptechpod
My Twitter: https://twitter.com/sturmerph
Music by Nathaniel Drew x Tom Fox:
Design by Amadeus Schwed:
I really thought a lot about what it is that makes startup companies great. And I think I've learned a lot there.
What was it that you learned?
Well, there’s a trivial and perhaps often overemphasized thing, but it probably isn’t overemphasized, and that is: It is about people. So if you go in and assess a company that has the stupidest business idea made by the cleverest people, they will have a much higher chance of just succeeding than the most excellent business ideas by people who are not as qualified at building a company.
Welcome back to Deep Tech Stories, a podcast that makes creators, entrepreneurs, and idealists in the deep tech space accessible by highlighting their stories and pulling their ideas from the lab into the real world.
I’m Philipp Stürmer, and on the show today: The first part with Casper Wilstrup and the road to coming up with a new kind of AI.
Modern AI, in particular so-called "deep learning", is everywhere. In the last few months alone, a new generation of image generation based on text prompts has emerged that is arguably on par with human artists. However, in many crucial applications, the black-box nature of such models is a problem. If you want to let an AI drive a car, no regulator or decision maker will allow it. The same is true for any application that deals with scientific understanding. You want to understand why the AI comes to its prediction and which parts of your dataset are essential. But there is a different approach to AI that could solve this black-box problem: so-called "symbolic regression". Instead of obtaining a model made up of several million or even billions of parameters, you get an equation that a person can understand just by looking at it.
The problem is that symbolic regression is insanely hard to do. In fact, it is so tricky that no one has managed to use it for anything but the most basic toy models. No one but a small Danish startup called Abzu, which just received a 2.5 million euro grant from the European Union to apply its symbolic regression algorithm to drug discovery.
Abzu's CEO Casper Wilstrup, an avid coder ever since his childhood, initially came up with the idea behind Abzu and the QLattice during his physics studies in the mid-90s. Since then, his career has taken many twists and turns, all tracing back to his physics studies and to getting his first computer when he was nine years old.
I got my first computer, I believe, in early 1980. The way I got it was because a friend of mine showed up at my house in the countryside one day. I was about 9 or 10 years old. And he brought a computer that I didn’t know at the time. I didn’t know what a computer was. But it turned out the computer was a Sinclair ZX81, which is a very primitive machine. One kilobyte of RAM, it was a Z80 CPU at, I think, 2.2 megahertz, very slow, very slow. And it had a lot of weird little quirks.
You connected it to your TV through an antenna cable, and then you could code a little bit there, as much BASIC code as you could fit into one kilobyte of RAM, which is not a lot. The thing is, he left it with me because he was bored with it. So it became mine, and I used it for a couple of years. And then a couple of years later, I actually bought the computer that I think of as the beginning of my passion for computers. That was the Sinclair ZX Spectrum. This was technically not my first computer; it's the first computer that I actually owned, but not the one that I started with. But it's the one that was powerful enough to do anything for real.
And that’s where the passion really started.
I mean, by today’s standards, it’s basically just a pocket calculator.
Yeah, it actually has the Z80 CPU, a Zilog CPU that sits inside of it and is way weaker than what you'd have in a standard pocket calculator nowadays. It is a really old-fashioned CPU, but it is a CPU: 8-bit, with a full instruction set. And it comes with this quaint little BASIC interpreter, built in by Sinclair Research Limited, that allows you to code in BASIC as soon as you power it up. But even the BASIC is quite limited. So what happened really quickly, at least for me, was that I figured out you could actually code it in assembly. There's no direct way to code this thing in assembly, but it's actually not that hard to code in. It's hard to make a big, complicated program, but it's not hard to make a program that writes, "Hello, world. I am your ZX Spectrum" on the screen. And then it can say, "Who are you?" And the user can write, "Casper," and it says, "Hi, Casper, how old are you?" Those kinds of programs. That was the kind of program that I coded for the first, say, half a year.
What was the fascination there, when you were 9 or 10 and he brought over the computer?
Yes, I was reflecting on that actually yesterday, because I posted something on its 40th anniversary. It was a very strong feeling.
And it’s a little hard, I guess, to express to people who’ve grown up with computers. But I remember this, no way, this can’t be true. I typed something on a device that looks like a typewriter. And then that gets shown on my TV screen, which cannot normally show things that I type. And then I can actually run that program, and it does something. And it can interact with me almost as if it was alive.
And I think I felt the Alan Turing-style fascination, that a computer can trick you into feeling that it’s almost alive and that you control what it can do, and you can bring any potential out of it. So it was a very strong feeling of creative power.
It's probably hard to appreciate today, because you wouldn't even think about how crazy all of what I just explained is if you've grown up with it. But for a kid who had never ever interacted with a computer, a TV was a one-way thing that showed movies, and not even that particularly often, because in 1980, Danish television was not all that exciting. And suddenly you had this completely new modality.
It opened up a different kind of world to me that I just didn't believe existed, and it seemed miraculous. So that got me hooked. And it also got a lot of other kids my age hooked; I wasn't the only one. In every class, at least in my school, there were two or three of these early adopters of that 1981-to-1985 generation of computers that came out, of which the Sinclair ZX Spectrum was the first big one in Denmark, and I believe in all of northern Europe. But on its heels came the Commodore 64, which was probably a lot more popular because it was better for gaming. That was my luck, I guess: you never really bought the Spectrum as a gaming machine, because it's not good at gaming. It's too weak, there aren't enough colors on the screen, and it can't make sounds. It's too primitive.
So that meant, and that's something that I've also reflected on myself, that as a kid, I quickly realized that the games I made were no worse, and sometimes better, than the games I could buy in my local computer store. So suddenly you had a situation where it was feasible, especially when I was a little older, 13 or 14 years old, for me to create computer games, and computer programs in general, that were as good as or even better than what people were actually buying. One example I remember, probably from sixth grade or so: another friend of mine, a guy called Thomas, was also an avid coder. And we had a competition where we gave each other one week to see who could make the best game. Then we just went home and coded, and we met after that week and compared games.
How did you learn to code when you were 9 or 10? Did you just go to your parents, or to the public library?
Books on BASIC and Assembly?
Books on coding and the ZX Spectrum started coming out. There actually is a fairly good manual for the ZX Spectrum that teaches initial BASIC coding, but nothing in terms of assembly. I can almost still remember what the books looked like.
Relatively small books by nerds, for nerds, that just described the Z80 assembly language.
And your parents were thinking, what is he doing? What is Casper doing? He’s going to the library every day.
Well, not every day, because you also have to read the books. But I think my parents were a little bit mesmerized by what in the world this kid was up to. Also because I probably deviated a little bit from the norm. I never really played computer games in that era; it didn't really catch on with me. I was much more into making things, and I still have that in me: I don't really like to consume, I like to produce. And that goes across anything. It's not a moral stance or anything, it's just how I am as a person.
How did this urge of producing play out in school then? Because if you were like, “Please read this book,” and like, “Really, do I have to?”
It wasn't like that. I can certainly enjoy a good movie, or reading a book, or listening to music. And I got along fine with school; I never really minded going. I was a little bit lazy, as everybody probably is, and didn't do much homework. But going to school was fine for me.
And then somewhere along that point you decided to go into physics instead of going the common route of computer science.
Yeah, but that wasn't the common road at the time. Sometime during my grade school years, probably in the late 80s, I started to think more deeply about what I actually wanted to do when I grew up. But computer science wasn't really a thing. You could actually study computer science at the time; the Computer Science Institute at Copenhagen University had just been created out of the Mathematics Institute. But it wasn't clearly a career path. Computers were still so new in people's minds.
And this is a little detour, but I think even today, it's still not entirely clear what you do when you do computer science. Computing is a tool that you use for something else. So if you make software that is used by the insurance industry, why don't you call yourself an insurance analyst? If you make software that's used by physicists, why don't you call yourself a physicist? And so on. Nonetheless, it has become a trade and a career route in its own right. But it wasn't clear at all in the mid-80s that it was something you could study. That would be like studying paper because you needed paper to write on.
I mean, in the end, it’s kind of just discrete mathematics.
Yes. But in the end, it could make a lot of sense to study mathematics, which was certainly also among the things that I considered. So it wasn’t clear, even when I started at the university, what I actually wanted to do.
Did I want to be a physicist or a mathematician? So it was certainly because I felt that computers were core to what I was good at and something I really wanted to use that I went in that direction. But it didn’t occur to me to study computer science as such. So that meant physics and mathematics.
So you were in your first semester, I guess you picked some courses in maths or you had the mandatory courses in maths that everyone has, and then the physics courses and then maybe a bit of chemistry.
No chemistry. In the first year, I ended up taking what's called Physics 1, which is a third of a year. Then I took Mathematics 1, which is also a third of a year. And then I actually took what was called Computer Science 0, also a third of a year. Computer scientists have to be different, right?
So they were numbered from zero and upwards, while the other courses went from one and upwards, but whatever. That was my first year. Even after the first year, the options were still open for a master's in any of those things. But then after that I had to pick, and it became physics and mathematics. So that's what I did on what would be called the undergraduate part nowadays.
So computer science was pushed to the side, and I took some courses, but only to fill the holes in the curriculum. It was more a thing I did as a physics student than something I studied. But it also became very much that. This is where my life really starts taking the direction it has followed ever since. Because even as a first-year student... at that time, nobody could code in the physics environment. Nowadays at least a few people know a little bit of Python. Back then, I'd say about 1 in 10 professors could do Fortran, and the other 9 in 10 could do nothing. They couldn't code at all. Nonetheless, it was clear to everybody that this was a useful tool for physics, not only experimental physics, but generally for a lot of things within physics.
So people like me, still a relatively small group who were actually really good at coding, got pulled into a lot of things. I was pulled into the tandem accelerator laboratory down at Risø. And I sat there all alone and coded the control software for some of the modules in the tandem accelerator. This is weird just to think about, right? I was in my second year, maybe even my first year, I don't quite remember, when I was asked to code the software that controls the tandem accelerator. That wouldn't happen today. But it did happen back then, because nobody else could code. And that meant I got involved in things that you would normally not get involved in as an undergraduate student nowadays. And that really shaped my direction. I do find physics very fascinating.
I’m a big fan and still a reader of physics. I would probably also think of myself as a fairly good physicist. I also got good grades in that, but the thing that really drove me was what computers could do to power physicists more than being a physicist myself. And that got me involved with some early projects to build computer clusters.
So again, remember this was in 1993. The networks were slow, unreliable, and shared, meaning only one computer could speak at a time; not particularly efficient. That meant that real computer clusters were hard to build. So all of the high-performance computing initiatives at the university at the time were based on much bigger single computers that the Institute was buying, from IBM, actually. These machines cost several million kroner apiece, and that was what the Institute bought to do computing on. And I got involved in actually buying these machines and figuring out which ones we should buy, because I knew stuff about that.
So I was pulled in by a professor to help with that. And then a friend and I got the idea: hey, we can build these supercomputers ourselves, based on commodity hardware in a networked environment. So we essentially came up with the idea of building a computer cluster to run simulations of things. I started looking for an operating system that would allow us to build such a cluster. At the time, I was using an operating system on my own machine called Minix, which is a 16-bit Unix clone that was not particularly efficient. But then the 32-bit machines came along, and I started looking around.
How did you look around? Because you obviously didn’t have Google.
No, I didn’t have Google, but actually, we did have access to the internet.
So I first met the internet in 1991, when I started working on various computer things at the Institute. We had a computer available to the students; that machine was an IBM RS/6000, but one of the older models. So we students could play with that, and it was connected to the internet, the nascent internet of the day. So I was able to search around, not on Google, obviously. I remember the internet before the World Wide Web: in the beginning, you used Archie and those kinds of protocols to search for information, and a lot of Usenet, those bulletin-board-style services. That was the kind of method I used to search around.
Then I came across a very early publication by Linus Torvalds of the Linux operating system, before version 1.0. At that time he was just working to create a 32-bit operating system like Minix. I think he even described it as a 32-bit version of Minix. It isn't, because Minix was a microkernel architecture, whereas Linus Torvalds for various reasons decided on a monolithic kernel approach. But anyway, it was definitely a 32-bit Unix-like operating system. So I got involved with that, because this operating system was what we needed to build the computer clusters. However, we also needed to write some network drivers for that to work, because the coaxial cables were just too slow to build a cluster on. And the Institute had just bought something called CDDI, Copper Distributed Data Interface: very expensive networking technology. It was crap, really, but that's what they bought. And it was 100 megabits, so in many ways much faster than the contemporary approaches. But that meant that to use the Linux machines, we had to actually hack the kernels to support this network architecture that we had.
So you basically had to build kind of everything from scratch, assemble everything from scratch.
More or less. We essentially built operating systems from scratch by fine-tuning Linux to actually work on this. And it consumed me; this is what I did. Less and less physics, more and more of that. And then different kinds of physics researchers would come to me and say, hey, can you help me with this or that? My friend, that other guy also called Casper, and I just loved helping with these kinds of things. So I became less and less of a physicist and more and more of an in-house computer nerd at the Niels Bohr Institute.
And they paid you for that or do you just…
Sometimes, yes. Some of these projects were paid and some were not. But yes, I think that's pretty much what I was living off; it was a paid thing. And then we ran into this idea of quantum lattice gauge simulations. I don't know if you're familiar with the approach, but what it is really about is this: You think about particles in quantum space as if they can't take a continuous path from A to B. They can only take a discrete path, where they jump between the centers of cubes in a lattice space. So instead of going from here through some arbitrary path to there, they have to take discrete steps through a discretized lattice space.
Like I have a ball on a hill and instead of having the smooth surface of the hill, I just have steps. And then add to that a Feynman path integral where what you’re really trying to do is you want to sum up an infinite set of possible trajectories through space and time for a particle to move from A to B.
That’s a Feynman path integral. And that can be solved analytically in some situations, but the moment you have two or more particles that you have to do a Feynman path integral over, you can’t. So if you want to understand the behavior of n particles in quantum space, then the only method actually is through quantum lattice simulations, unless you have another experimental setup.
So if you want to simulate it, that's the only way you can do it. You do it in this lattice space, where you chop up space and time into little squares or little boxes. And then, within that lattice space, you can calculate to completion all possible trajectories that particles can take. You do your Feynman path integral in that space, and then you make the size of the edges smaller and smaller and see: does this model converge to something? That limit is the result of the Feynman path integral, and you can verify experimentally that it actually matches. So that's the rough idea of a quantum lattice gauge simulation, and that's what we did with the clusters we were building. And I quickly got the idea that this is really a graph search algorithm. What you're really doing there is searching through an infinite space of potential graphs, and graphs here means nodes and edges, in the computer science sense of the word graph, where you have balls connected by edges.
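The discretization described above can be sketched with a toy example. To be clear, this is not a real lattice gauge simulation and not Abzu's code; it is a hypothetical illustration of the one idea that matters here: on a lattice, every discrete trajectory gets a weight, and you sum the weights over all trajectories.

```python
# Toy illustration (not a real lattice gauge simulation): enumerate every
# discrete path on a small 2D lattice from corner (0, 0) to corner (N, N),
# moving one step right or up at a time, and sum a weight over all paths.
# This is the discrete spirit of a path integral: a sum over trajectories.

from functools import lru_cache

N = 4  # lattice size; shrinking the edge length corresponds to growing N

@lru_cache(maxsize=None)
def path_sum(x, y):
    """Sum of weights over all right/up paths from (x, y) to (N, N)."""
    if (x, y) == (N, N):
        return 1.0
    total = 0.0
    if x < N:
        total += 0.5 * path_sum(x + 1, y)  # step right, toy weight 1/2
    if y < N:
        total += 0.5 * path_sum(x, y + 1)  # step up, toy weight 1/2
    return total

# Every path has 2N steps of weight 1/2 each, and there are C(2N, N) paths,
# so the sum is C(2N, N) / 2**(2N). For N = 4 that is 70 / 256.
print(path_sum(0, 0))
```

The memoization (`lru_cache`) is what makes the sum tractable: the number of paths grows combinatorially, but the number of lattice sites does not, which is the same reason lattice simulations are feasible at all.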
I have a train network and I’m going from Berlin to Barcelona, and every train station on the way that somewhere in Europe is a node on every train track is an edge. And then I can transport things in various ways through there.
Exactly. That’s what a computer scientist would think of as a graph.
So a trajectory of n particles through a lattice space is also a graph, because they can meet and interact; they can also change themselves, as a self-interaction. So you can actually think of this as a nodes-and-edges thing. And anything that can be thought of as a nodes-and-edges thing can be studied with the same methods as you study Feynman path integrals in this lattice space, regardless of whether it's a traveling-salesman-like graph problem, or a logic diagram, or a mathematical equation, because a mathematical equation can also be written as a graph.
If I say A plus B, you can think of the plus as a node, and A and B as nodes too. And then there's an edge that says A goes into plus, and an edge that says B goes into plus, and out comes the result. Now you have translated A plus B into a graph, and you can do the same with any mathematical expression. You can translate it to a graph and back.
There’s a bi-directional way to do that. So that means that we can actually think of every trajectory in this space as a mathematical formula or equation. And then we can use the methods that have been fine-tuned to study this kind of problem to study the space of all mathematical equations. That is the core idea of actually what we do here at Abzu.
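As a rough sketch of that bidirectional mapping (hypothetical code, not Abzu's QLattice), an expression like A plus B can be held as nodes and edges, evaluated, and turned back into a formula string:

```python
# Minimal sketch of treating a mathematical expression as a graph of nodes
# and edges, with the bidirectional mapping back to a formula. Hypothetical
# illustration only; names and structure are invented for this example.

class Node:
    def __init__(self, op, children=()):
        self.op = op                    # operator name, variable name, or constant
        self.children = list(children)  # edges to the input nodes

    def evaluate(self, env):
        if not self.children:           # leaf: look up a variable, else a constant
            return env.get(self.op, self.op)
        args = [c.evaluate(env) for c in self.children]
        if self.op == "+":
            return args[0] + args[1]
        if self.op == "*":
            return args[0] * args[1]
        raise ValueError(f"unknown operator {self.op}")

    def to_formula(self):
        # Graph -> equation string: the "translate it back" direction
        if not self.children:
            return str(self.op)
        left, right = (c.to_formula() for c in self.children)
        return f"({left} {self.op} {right})"

# "A + B" as a graph: two leaf nodes with edges into a "+" node
expr = Node("+", [Node("A"), Node("B")])
print(expr.to_formula())                 # (A + B)
print(expr.evaluate({"A": 2, "B": 3}))   # 5
```

Because the mapping runs both ways, a search over graphs is at the same time a search over equations, which is the connection the conversation is driving at.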
And that idea was born in the late 1990s. Yeah, I usually say it was around 95, because that matches chronologically with what I was doing at the time. I can't really say for sure if it was 95 or 97 or something, but it dawned on me that we were not just studying Feynman path integrals. We were actually studying graphs.
Was that dawning a progressive process, during your quantum field theory lectures, or were you just sitting there and saw the formulation just…
It came from implementing these things. The moment you implement it, you ask: how do I actually do this? Well, I have to make a structure that represents the state of each lattice site, and it can either be active or not active in a certain time step, or space-time step, actually. At any given time, it can be active or not active in the graph. And then you want to store this lattice, and the lattice gets bigger and bigger because you make the edge sizes go down. How can you store that efficiently? Well, you store just the active nodes and their connections to the other nodes. That's essentially a graph.
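A tiny hypothetical sketch of that storage idea (illustrative only, not the original code): keep just the active lattice sites and the connections between them, which is exactly an adjacency structure, in other words a graph.

```python
# Sparse lattice storage: instead of a dense grid of active/inactive flags,
# keep only the active sites and the connections between neighboring active
# sites -- an adjacency list, i.e. a graph. Hypothetical illustration.

from collections import defaultdict

active_sites = {(0, 0), (1, 0), (1, 1)}   # lattice sites that are active

# Connect active sites that are lattice neighbors (right and up suffice,
# since each undirected edge is recorded from both endpoints)
edges = defaultdict(set)
for (x, y) in active_sites:
    for nx, ny in ((x + 1, y), (x, y + 1)):
        if (nx, ny) in active_sites:
            edges[(x, y)].add((nx, ny))
            edges[(nx, ny)].add((x, y))

# Memory grows with the number of active sites, not the lattice volume
print(sorted(edges[(1, 0)]))             # [(0, 0), (1, 1)]
```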
So suddenly you can’t really implement this without realizing that you are actually studying graphs. Of course, if you’re not a computer scientist, you may not think about it as something that has any interest in that sense. But I did. I can’t tell you why. I just did.
It came to me: this is a general graph search algorithm. But I didn't do anything about it at the time at all. It was just an idea; I played with it and let it rest. Because what I did instead was this: the other Casper and I had developed some software that was designed to run directly on the network cards of these cluster computers, which allowed us to do a lot of optimizations of the routing of the TCP traffic in a network cluster. Essentially a smart network card that was tailor-made for building high-performance computing systems.
Like an actual physical card.
An actual physical card.
These network interface cards existed with FPGAs on them, so you could actually program them in a very primitive way. You could do things on these network cards that offloaded work from the central processing unit of the computers. And besides what we did inside the cards, we could also do things in the computers' own network processing layers.
But fundamentally we built network interface cards that were fine-tuned for people who wanted to load balance tasklets or tasks on a cluster. And then we started a company to try to produce and commercialize and sell this technology. The company was called Unispeed, and we started it in late 1999. So that is really interesting in one sense because we had early traction. This was the heyday of the dot-com era.
And our first customers for this technology were data centers at big network operators at the time. And those were the dot-coms. I think our biggest customer was Jubii, which was a Yahoo clone in Denmark, by Casper Larsen, Martin Thorborg, and Henrik.
Yeah, another Casper.
And that’s interesting because that other Casper, Casper Larsen, actually ended up investing in our company, so he bought a share in Unispeed and then it was Casper, Casper, and Casper doing this crazy technology.
And we sold it to Jubii first, obviously, but then also to others of the bigger companies at the time, like Ashiro. We even ended up selling it to eBay and companies like that as they emerged.
But then the dot-com bubble burst. It didn't kill us; we found a way to survive as a company. But suddenly the old customers were struggling massively, and it was no fun from then onwards. We had a bumpy ride from then until 2006, and then I exited. I think I sold it in 2004, and I had to spend two more years there. So that was what took me out of physics and into the startup world. For five years, Unispeed was all that mattered. We went from being the three Caspers to, at our peak, 18 or 19 people, and then down from there.
That I guess changed my career trajectory a lot because suddenly I was not just a computer nerd with physics aspirations. I was a computer nerd with business aspirations.
When you sold it, was that more of a relief?
It was a relief. We had been through two booms and busts. So first the dot com, and then we found a new business vertical. But the new business we went into was suddenly copied by one of our collaborators and then they kind of stole the entire market. And then we went almost bust again, and I repaired the company once again.
And then two years later, that would have been another birth.
Yeah, I guess. And you can never really say yes or no. Sometimes you just have to wash your hands and start over with something new. So I sold Unispeed.
And then I had 10 years as a consultant. I joined a very newly established consultancy company called Best Brains, founded by another friend of mine. And I came into that company thinking: let's make a big business out of this. Now I was the businessman and I wanted to take my new knowledge and bring it to some kind of… No, wait, first I wrote a book and sold it to a publishing house, and I was thinking, should I just change direction entirely and write fiction?
By then you were? Mid 20s, late 20s?
The late 20s, yes, I think 29 perhaps. No, that’s not true. Actually 35 or something like that. This was in 2006.
For some reason, I calculated you got your first computer when you were freshly born.
Not quite. I was born in 71, so I was in my mid-to-late 30s when I wrote "The Land of Forgotten Gods". So anyway, that was a year, a little more than a year, of a sabbatical thing. I also had my first kid. No, I didn't; I had my first two kids during Unispeed. I had my third kid while I wrote that book.
Do you have more than three kids?
I have four kids.
Okay. There's this weird pattern that I keep seeing: people in Sweden have three kids, and that seems to be the golden number for all academics in Sweden for some reason.
It is actually a pretty good number. However, I think two and two is also very good, because they can almost be twins and really have each other. My two oldest are 20 and 18, and the youngest are 13 and 10. Actually, he isn't 10 yet; he turns 10 in a few days. So they're tightly coupled in pairs, and I think the children enjoy having each other a lot. So I certainly am happy myself that I had more than one, for sure. I also have two brothers, so I'm part of a family of three myself.
But I had four and my brother has five, so we apparently wanted to surpass our parents. There are a lot of us. And now they're big; his children are also grown-ups now, so you can imagine the get-togethers with a lot of children.
While I was writing that book, I was thinking about, “What now?” And then I ended up joining Best Brains and I came into Best Brains with full ambitions about taking this rather interesting consultancy community-style company into a big success. I also was at that time very interested in organizational culture and how to bring the best out of people through good work environments and good work experiences where you can really flourish. So I had a lot of those ideas with me into Best Brains and Best Brains was a good success. It was a very nice consulting company.
I started finding a speciality, a Casper speciality, if you will, where I would help big companies who wanted to do what we dubbed internal startups: to be more startup-like in some internal business initiative. So I joined some of these companies as a consultant through Best Brains and tried to build up a business line around selling these kinds of projects. We called them 'Ignite' projects, where you want to reignite your business by having a startup that is incubated inside an existing corporate. The idea still exists, but it's very challenging to make it work in practice, for a lot of reasons.
I keep seeing people with job titles similar to in-house entrepreneur or intrapreneur.
Yes, exactly. It’s still a dream, but I think there are a couple of things that you have to accept as a corporate if you want to get that to succeed and it’s things that are very hard for a lot of corporates to accept.
It has to do with letting go of control, and with accepting that an internal incubation can fail or succeed on its own terms. And if you try, as an external stakeholder inside the organization, to do what you think is right to make it succeed, then in a way you’re actually killing it, or turning it into just another department in your organization.
For most leaders in corporate environments, that is an insurmountable barrier: accepting what they perceive as wrong decisions being made inside a project incubated within the company. So control processes creep in bit by bit, and then suddenly it’s no longer an intrapreneurial startup at all. It’s just yet another business initiative.
I’ve seen that happen every time. So that’s what I dedicated myself to, from around 2009 to 2015 or so. But I also had my network, my connections with the venture capital industry, which was also quite immature at that time compared to what it is today. So I worked as a consultant for a lot of the Danish VC funds in the late 2000s and early 2010s, from around 2008 or 2009 onwards. What I mainly did was due diligence, which is assessing the quality of a potential investment candidate on behalf of these funds. During those years, I worked on or helped make investment decisions in more than 80 different startup companies. So I’ve seen a lot of startup companies, and I’ve seen how they would develop. I also collected some data on that. So I really thought a lot about what it is that makes startup companies great. And I think I’ve learned a lot there.
What was it in hindsight?
Well, there’s a trivial and perhaps often overemphasized thing, but it probably isn’t overemphasized, and that is: It is about people.
So if you go in and assess a company that has the stupidest business idea made by the cleverest people, they will have a much higher chance of just succeeding than the most excellent business ideas by people who are not as qualified at building a company.
Qualified in terms of hard skills or how they interact?
It’s a combination of, I guess, what people would like to call leadership skills, but I would really call it motivational skills: creating an environment where creativity and skills can really flourish and come to fruition.
Yeah, let’s call it leadership, but it’s more than that. It’s also a coaching mentality combined with acceptance of the reality of business life.
Certain things just have to work for a company to be successful. It doesn’t necessarily have to make more money than it spends; that’s definitely not a thing in the startup world. But it has to make money, and it has to have a clear idea about how it is solving a problem that is, from an outcome perspective, important to some people who have money. In the dot-com bubble, at least, I think that was probably the main problem. It’s not even my quote, but the way it was often formulated was that the business plans of the day were very much: “Have awesome technology, build a cool product, mumble, mumble, make money.” And the “mumble, mumble” part is where the battles in startup companies are won or lost. If you can get the “mumble, mumble” to not be “mumble, mumble,” then you will succeed, even if your product is not particularly cool.
The most recent example is, what was it, Fast, three or four weeks ago. It’s a company that had a hundred million in funding and I don’t know how many hundred employees, but managed to make a hundred K in one year from fast checkout, and it just folded based on that.
Exactly. And that’s a danger in startup culture, right? Because in startup culture, we do actually accept the no-revenue or low-revenue situation. And that’s a good thing.
I mean, it allows you to do riskier things.
Yes, exactly. For a while. But of course, you have to know when that while ends. You have to have the business acumen to understand what you are trying to figure out while you’re not making money. And that thing had better be: how are you going to make money?
And not necessarily only that; you can think about a lot of other things. Here at Abzu, we think a lot about AI and the future of humanity and philosophy of mind and all sorts of things that really motivate us to do what we do better. But it all has to be combined into an idea about: “Okay, what does this mean in terms of outcome for somebody who has money and who will pay you for that, if not now, then later?” And if you get that right, then you’re going to succeed. Then it doesn’t matter so much what your product is. Of course, if you have cool technology or a cool product, then the success may be bigger. But in terms of the VC business models, it’s not really so much about that. There’s always an element of luck as well.
You can also have a business idea that’s just very well suited to doing something for company offices, and then suddenly you realize that there’s a COVID lockdown, which means that nobody actually invests in their office space for a while. And that’s not good if you’re a company with a product focused on that. So luck plays a role as well. But what plays less of a role in the initial phases is actually the product, and you could even say the technology.
I think that when you look at the Danish startup community, at least back then (I think it’s starting to change), the Danish startup community of the ’00s and ’10s was very often very tech-driven. That was also my pedigree: a very tech person who tries to make some kind of business. Whereas the US / Silicon Valley approach is more often a business person who has some kind of idea and then finds a tech co-founder to help him realize that idea.
And I’m not saying one is better than the other, but I am saying that you can fumble with the tech for a while without that destroying you. But you can’t fumble with the business for long; only very briefly, at least. So if you’re not good at the business side of things, you’re not going to succeed, whereas if you’re not good at the tech side of things, you can get that skill later on. So I think the American startup culture is probably a bit healthier in that sense. And I think it’s also changing. That being said, I have met tech founders; people like Elon Musk are tech founders, so clearly a tech founder can also be successful.
But if I were to blindly put money into either a business person with a product idea paired with a tech person, or a tech person with a tech idea paired with a business person, I would take the first one.
Thank you so much for listening to this episode. If you want to hear more, subscribe to Deep Tech Stories wherever you listen to your podcasts, or follow me on Twitter @sturmerph. You will be hearing back from me in two weeks with the second part of this interview where Casper explains how he built Abzu and how he got his first customers using the QLattice.