In this week’s special edition of “News You Can Use” on HealthcareNOW Radio, I sat down with Dinakar Munagala, CEO of Blaize, to explore the company’s remarkable journey from a garage startup to a NASDAQ-listed pioneer in Edge AI.
Dinakar shares his background in chip design at Intel, where he contributed to graphics processors for Apple and Intel laptops, before co-founding Blaize. He explains that Blaize emerged from a realization that traditional GPUs were ill-suited to new AI workloads, especially those outside the data center. This led to a new kind of processor architecture, designed specifically for real-time, low-power AI applications in the physical world.
One of Blaize’s key differentiators is its focus on edge processing: bringing AI directly to devices like smart cameras, autonomous vehicles, and medical equipment, where decisions must be made in real time. Edge AI poses technical and logistical challenges of its own, distinct from those of data center AI, and it is Blaize’s approach to these, pairing efficient chips with intuitive software, that sets the company apart: advanced AI capabilities become accessible without requiring deep technical knowledge.
We dive into compelling real-world applications, particularly in healthcare, where Blaize’s technology has been used to detect COVID-19 via X-ray scans, monitor retina images for early cancer detection, and support elder care by alerting staff to patient falls or abuse.
Above all, their design philosophy centers on augmenting rather than replacing humans, addressing both utility and liability concerns. As AI continues to permeate every aspect of life, expect Edge AI to play a central role in delivering scalable, secure, and impactful solutions right where they’re needed most.
Until next week, keep solving healthcare’s mysteries before they become your emergencies.
Listen live at 4:00 AM, 12:00 Noon, or 8:00 PM ET, Monday through Friday for the next week at HealthcareNOW Radio. After that, you can listen on demand (see podcast information below). Join the conversation on Twitter at #TheIncrementalist.
Listen along on HealthcareNOW Radio or on SoundCloud
Raw Transcript
Welcome to this special edition of News You Can Use, brought to you by Blaize. I’m your host, Dr Nick, and joining me today is Dinakar Munagala. He’s the CEO of Blaize. Dinakar, thanks for joining me today.
Dinakar Munagala
Nick, thanks for having me here. Looking forward to it.
Nick van Terheyden
So if you would, I always like to start off the show by establishing a little bit of your background, some of the history that got you to this point. Tell us how you got here and what brought you to this point as the CEO of Blaize.
Dinakar Munagala
Sure, thank you, Nick. I went to school in India, and then came here to the US to do my master’s at Purdue, and then started working for Intel. Intel is a giant company, as you know, and within Intel I joined the graphics processor division and started building graphics processors for the Centrino line of laptops as well as Apple MacBooks. Those are the products our graphics chips ended up in. I worked there for twelve-plus years, then left Intel, and, pretty much in Silicon Valley garage mode, my co-founders and I started our company, which is Blaize. We had ideas for a new kind of processor architecture. We saw that the emerging workloads were very different and needed a new kind of processing. That was the genesis: GPUs were running out of steam for the new frontier workloads, and that’s how Blaize happened. We went through our share of ups and downs, as any other startup does, but fast forward through years of hard work, with a lot of stellar team members who joined us along the journey and believed in the vision, and we now have product in the market: AI chips and software, primarily focused on the edge, on physical-world AI. And we recently went public on NASDAQ, and here we are. It’s good to be in the public markets for the visibility, but the mission is the same: to deliver compelling AI solutions focused on the edge and physical-world AI, and we continue to do that. That’s where we are.
Nick van Terheyden
You know, as always with my guests, fascinating background. I’ve got to ask, it’s a burning question in my mind, maybe not on everybody else’s: you were right there at the start of graphics processing. You look at Nvidia, which has taken off big time because of AI. They must have been a large competitor at the time, is that true?
Dinakar Munagala
Absolutely. Nvidia is a stellar company, and at the time, and even now, Nvidia is massive. Then the ChatGPT moment happened, and that propelled Nvidia into the whole AI training boom. If you look at computing history, Intel started off with the CPU and the Pentium: how do you run operating systems better and faster, and so on. Then along came the next generation of workloads, which is gaming, where the fundamental question was how you run graphics processing, many pixels on the screen, in a more efficient manner, parallel processing all the data and the pixels. Now fast forward to AI. It’s a new kind of language and a new kind of computing, but absent any dedicated processors it’s being retrofitted onto existing processors, which include GPUs. But the kind of parallelism you can extract for AI is different. It’s called data flow graph parallelism, and at Blaize we’ve specialized in this and in how you run it efficiently. Now, AI has two parts: the data center, where Nvidia is on a tear, and the part outside of the data center, which is physical-world AI. That’s everything from industrial automation, defense, smart cities, healthcare, and education. There’s about a million times more data there, and that much of an opportunity, and it is an underserved market. That is where we have chosen to focus, and we have real products now, and this opportunity is where we are.
Nick van Terheyden
I mean, I just imagine this was a formative time in your career, really exciting to see that take off. Obviously, there’s the inflection point that we’ve seen with ChatGPT, which I think predates or postdates what you’ve been doing, although this technology has been in development for a long time. You bring up AI, and I’m not going to dwell on it; we’ll just take that as a given. I know there’s lots of complexity, but for the benefit of the audience we’ll assume we’re talking in the broadest terms. But you’re mentioning something that’s a little bit different, a little bit unique, and I think important to understand, and that’s edge processing. Specifically, at the coal face is how I would describe it. Help everyone understand what it is about that technology, and the way you’re addressing it, that is important.
Dinakar Munagala
Sure. So let me contrast it. When you look at something like ChatGPT, or anything in the data center, what you’re essentially doing is you have a massive amount of compute, there are no power limitations, and, especially for AI training, it is not done in a real-time setting. So you do all of the compute in the data center and manage it in the data center. When you shift to real-world applications, physical AI, it is a car driving at, let’s say, 80 miles an hour, and you need to make a real-time decision. You can’t send it back to the cloud. So real time is very critical: doing the same compute, with so many sensors, and making a decision. There are often thermal and power limitations as well. This could be a camera on a pole in a desert in a defense setting, so you need lower power and lower thermals. That’s a critical thing, and often these are cost-sensitive devices too. It’s not about packing giant servers into a car or onto a lamppost; you need to do it at a modest cost. And then a very fundamental piece, one of the most important and often overlooked, is the knowledge gap. In the data center the customers are very AI savvy: it’s the Googles, Facebooks, and Apples of the world. They have all the AI knowledge and the software programmers and so on. When you shift to any kind of edge use case, factory floors, smart cities, defense, healthcare, and so on, those customers are good at what they do, but they’re not necessarily experts at AI programming. So how do you deliver the AI solution so that it’s truly simplified, easy to deploy, as though you could operate a Mac or a Windows system? It should be simplified down to that level for AI to proliferate. So it’s a combination of efficient hardware and easy-to-use software, and that is very different, and that’s where we’ve excelled.
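The real-time argument here comes down to simple arithmetic: at highway speed, the distance a car covers while waiting on a cloud round trip is measured in meters. A quick back-of-the-envelope sketch, where the latency figures are illustrative assumptions rather than numbers from the interview:

```python
# Back-of-the-envelope: how far a car travels while waiting for an
# inference result. The latency figures are illustrative assumptions.
MPH_TO_MPS = 0.44704  # miles per hour -> meters per second

def distance_during_inference(speed_mph: float, latency_ms: float) -> float:
    """Meters traveled while waiting `latency_ms` for a decision."""
    return speed_mph * MPH_TO_MPS * (latency_ms / 1000.0)

cloud_m = distance_during_inference(80, 200)  # assume ~200 ms cloud round trip
edge_m = distance_during_inference(80, 15)    # assume ~15 ms on-device inference

print(f"cloud: {cloud_m:.2f} m, edge: {edge_m:.2f} m")  # cloud: 7.15 m, edge: 0.54 m
```

Seven meters of blind travel per decision is the gap that on-device processing closes, independent of any particular chip.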
Nick van Terheyden
So the juncture for you, as with a lot of companies we’ve seen moving into the hardware space, is the blending of the two: the delivery of both the hardware, in this case chips, and I don’t mean to oversimplify for the purposes of the conversation, and the software that goes hand in hand with it. Obviously, given our focus on this show, we think about healthcare. It sounds like there’s broad application, as there is for AI across many industries, but tell us a little bit about how it’s being applied. What have you seen in terms of use cases in the healthcare setting?
Dinakar Munagala
Sure, we’ve been doing some use cases in healthcare. Let me back up a second. The generality of a processor is such that it is truly programmable, so you can apply it to different use cases, and the same foundational AI model applied to one use case can be applicable to another. As an example, if you’re trying to look for cracks in a magnetic tile on a factory floor, that same AI model can then be reused for detecting weeds on a farm, and it can also be used on X-ray images, cracks in a bone, and so on. So there’s a lot of reuse at the core foundation level, with some customization at the application layer, applying it to X-rays versus other use cases. That’s where the changes are. We built some of our own software, and we partner with software companies to deliver solutions. But staying on your question about healthcare: we’ve applied it to X-ray images. We’ve applied it to embryo clinics, fertility, where you’re doing this concept of almost truth witnessing: are they doing all the sequencing correctly, for compliance reasons? You could do it with a person, an expensive scientist whose only job is to monitor the other scientists and check they’re working in the correct sequence, or you can apply a camera and AI to that. There’s another interesting use case, the retina: looking at retina images and identifying cancer cells more accurately than a human. We’ve been part of such an exercise. We’ve also been part of defense work: how you can triage in a real-time setting, where the medic has to decide who gets higher priority and who gets lower priority. There are many use cases. X-ray radiology is a very interesting one; it’s universal across the planet and across many diseases.
So that’s an interesting one that we’ve been part of as well.
Nick van Terheyden
So, you know, folks who listen to this show will be familiar with it; we certainly talk about it fairly frequently. We’ve seen radiology AI technology for a considerable amount of time. I mean, I wouldn’t say it’s new. What is it about what you’re doing that’s different, that changes that equation?
Dinakar Munagala
So let me tell you a real story about how we even got into this. When COVID was happening, we were not doing anything for healthcare, and the circumstances were dangerous for the entire planet: we didn’t know what would happen, who would survive and who would not. There was no vaccine, and there was not even know-how. The only thing possible was, if you identified a patient with COVID, to isolate them and hope it doesn’t spread to the next person. So I got my data science team together and said, hey, look, all of this is happening. How can we actually be part of the solution? For all we know, we might not exist tomorrow. Let’s see what we can do to help. It wasn’t about commercializing it or anything, just about putting our skills to use. So we wrote to a few healthcare agencies here in the US as well as overseas, saying, look, we want to help, what can we do? What my data science team came up with was this: let’s look at the X-rays, because every tier-two, tier-three town across the planet has X-ray machines, and a technician is sufficient to look at an X-ray. So let’s build an AI model to look at X-rays and say it’s either COVID, or it’s pneumonia, or it’s something else, and equip them so that, potentially without even a doctor’s intervention, they can advise the patient: it’s COVID, so stay home. Our team actually put together an AI model within a week. Initially, I remember, it was about 70% accurate, and over a period of a few weeks they got it to 90-plus percent accuracy, which was good. And we donated it; we offered it to a few different governments, saying, look, we’ve done this work, take it and use it.
In fact, we even put up a small cloud, just for privacy reasons. We said, look, we’ll anonymize all the data; upload your X-ray and you’ll get an answer, and you can do what you want with it. So we did all of that, and I was very proud of our team, how they did this with our software tools and hardware in such a short duration of time. And fast forward, there’s one very interesting, unique thing, and by the way, we’ve patented this as well: the progression of COVID. If you recall, there was this ten-day period where it kept getting worse, and you’d either recover or be worse off. So we even figured out AI algorithms to track the progression of the disease: is it getting worse or better? That could give some insights. Now, interestingly, fast forward two or three years later: a hospital in the Middle East had a lot of migrant workers coming in, and they wanted to test them en masse for tuberculosis using X-rays. If they employed radiologists, they could do maybe one, two, three, four X-rays per hour, but AI could handle tens or hundreds of X-rays. The scale is massive. So that was a use case. It came down to scale, it came down to AI becoming an assistive tool, and, where applicable, tracking progression of the disease. That was the newness we brought to the table, to answer your question.
Nick van Terheyden
So all of that brings up a question in my mind. First, there’s the training aspect, which comes up frequently: what’s the source data, and how do you go about that? And then, if you come up with an answer, who’s responsible? If it falls down, how do you handle, I’m going to call it liability. I hate using that word, but ultimately that’s what it is. I mean, we live in the United States; that’s foundational to the practice of medicine. How do you handle those two things?
Dinakar Munagala
Sure. So maybe the first part of the question first: training data. Of course it is difficult, but once you get a few X-rays, a few images, there are tools that we have developed, as well as tools outside, where if you have 1,000 X-ray images you can make them look like 10,000, or even 100,000. You can basically make the crack look a little bit different automatically, and train the AI model on these. This is to augment what you have and amplify the data. So on the training side you can solve it that way, but somebody actually looking at the data and saying, yes, this is correct or wrong, that human element exists as well. There are of course automated tools emerging in that area too, so a combination of data, some people, and some automation solves that part of the problem. Now, the second part, the liability thing you mentioned: we always, at Blaize, look at our tools as assistive technology, not to replace the doctor but to help them with a view, and let the doctor ultimately decide. Within this you’ve got to separate out the US versus the rest of the world. In the US, of course, liability and insurance are held to a different level of strictness. Not that it doesn’t matter in the rest of the world, but there they’re willing to bring efficiency in without worrying as much about liability; the nature of their problems is different. How do I actually get care into tier-two cities or tier-three towns where there are no doctors? That is a fundamental problem before they get into who’s liable. But across the board, we try to stay assistive technology and let the doctor, or the technician, still decide. That’s how we’ve approached it.
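The augmentation idea Dinakar describes, turning 1,000 images into 10,000 by automatically varying each one, can be sketched in a few lines. This is a generic illustration using NumPy, not Blaize’s actual tooling; the particular transforms (mirror flips, 90-degree rotations, mild pixel noise) are common choices assumed for the example:

```python
import numpy as np

def augment(image: np.ndarray, copies: int, seed: int = 0) -> list[np.ndarray]:
    """Generate `copies` randomly transformed variants of a grayscale image."""
    rng = np.random.default_rng(seed)
    variants = []
    for _ in range(copies):
        img = image.copy()
        if rng.random() < 0.5:
            img = np.fliplr(img)                        # mirror left-right
        img = np.rot90(img, k=rng.integers(0, 4))       # random 90-degree rotation
        img = img + rng.normal(0.0, 0.01, img.shape)    # mild pixel noise
        variants.append(img)
    return variants

# One 64x64 "X-ray" becomes ten training examples;
# applied to 1,000 images, you get 10,000.
xray = np.zeros((64, 64))
augmented = augment(xray, copies=10)
print(len(augmented))  # 10
```

Each variant is slightly different, so the model sees the same finding, a crack or a lesion, under many presentations without anyone collecting new scans.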
Nick van Terheyden
So as you think about this, obviously, well, I say obviously, it feels like it’s still early days. It’s early days across the spectrum of AI, but specifically for edge AI, and you’ve obviously got different strands from a healthcare setting. Where do you see this going?
Dinakar Munagala
I do think it’s going to go into every walk of our lives, healthcare being no exception. The rate of proliferation will, of course, differ across geographies and for different reasons, like the liability you mentioned. It will also go into simple but very interesting, powerful use cases. For example, one particular project that we’re part of is an elder care project. Essentially, you have these elder care homes where the ratio of residents to staff is almost one to ten, or one to twenty, or maybe more. So how do you monitor an elderly person in the nursing home if they’ve fallen down or something, and raise an alert? A camera can come to the rescue, looking at the person and saying, somebody in room 35 needs urgent attention because they’ve had a fall. That’s potentially life saving. There’s another part of that which is very interesting: in some of these places there’s actually abuse happening toward the elderly by the staff, and the government wanted to detect that. The alerts are sent automatically to the government server, not to the hospital server, to preserve forensic evidence. These are interesting use cases where they’re already applying AI to solve real problems of importance, and we’re actually quite proud to be part of that project. Money-wise, of course, not every problem is about maximum revenue; it’s about bringing AI to real-world problems that we can actually solve.
Nick van Terheyden
So looking back, I mean, obviously exciting times; you’ve been in places where things are developing at a rapid pace. As you look back on all of the things that contributed to this point, where would you say the specifics came from? Can you pull out any of the threads of what helped develop this particular line of innovation and how Blaize came about?
Dinakar Munagala
So Blaize actually started off in garage mode, pretty much with: hey, we’re seeing new kinds of algorithms, let’s build a new kind of processor for the frontier workloads. And interestingly, at the time, on one of our trips to Japan, an automotive partner of ours, Denso, who is an investor, put us through rigorous testing, compared us to GPUs and to other processors, and concluded that Blaize was truly special, and they invested. Interestingly, all those use cases were around local processing. What are the key things? Doing things locally, in real time, power efficiently, all of those aspects. A car is an edge device, and similarly, many, many physical AI edge devices have the same kinds of algorithms and the same kinds of use cases, customized through the software applications. So that was the genesis, and that was the foundation, but now it’s applicable to a whole bunch of use cases.
Nick van Terheyden
So clearly, healthcare is under this increasing pressure to deliver faster, more accurate decisions. We’ve got more data swimming around, critical patient information, hopefully at the point of care, maybe not. It sounds like this is the nexus of the next wave of innovation. Do you see a future where there’s edge processing everywhere, with Blaize involved in almost every element?
Dinakar Munagala
I think so, yeah. It’s an excellent time to be where we are, at the cusp of innovation applied to the edge, including in healthcare. And you hit it exactly right: healthcare has one more very interesting thing, which is privacy of data. You can’t always send data to the cloud, and countries are very sensitive about not sending their data to other countries and so on. Which means you need solutions that are completely localized, on the hospital premises, in the country, and so on. Those are other reasons why the edge is favored in healthcare.
Nick van Terheyden
So, exciting times. I mean, I think real disruptive technology once again, albeit, I would suggest, it takes ten years to be an overnight success, and I’m sure you’ve lived and breathed that through all the joys of garage mode, all the way up to listing on the exchange. The ability to process data locally with edge AI, another instance of AI, but I think an important one, this local instance that allows you to deliver in real time, accurately, but also securely, because it removes the need to transmit, makes for exciting times. Unfortunately, as we do each and every week, we’ve run out of time, so it just remains for me to thank you for joining me on this special edition of News You Can Use. My thanks especially to Blaize, and especially to you. You can find out more about the company at blaize.com, that’s B-L-A-I-Z-E, with a Zee, or Zed as I prefer to say, dot com. Dinakar, thanks for joining me.
Dinakar Munagala
Thank you very much, Nick. Pleasure talking to you.