In this episode of Alter Everything, we sit down with Eric Daimler, CEO and co-founder of Conexus, and the first AI advisor to the White House under President Obama. Eric explores how AI-driven data consolidation is transforming industries, the critical role of neuro-symbolic AI, and the evolving landscape of AI regulation. He shares insights on AI’s impact across sectors like healthcare and defense, highlighting the importance of inclusive discussions on AI safety and governance. Discover how responsible AI implementation can drive innovation while ensuring ethical considerations remain at the forefront.
Panelists
- Eric Daimler, Chair, CEO & Co-Founder @ Conexus - LinkedIn
- Megan Bowers, Sr. Content Manager @ Alteryx - @MeganBowers, LinkedIn
Topics
- SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
- Uber Data Consolidation
- Alteryx FREE trial
Transcript
Alter Everything
Ep 178 LLMs and AI Regulation
[00:00:00] Introduction and Event Announcement
Hey Alter Everything listeners, we want to invite you to join fellow data lovers, analysts, and innovators at the Alteryx Inspire 2025 conference. It's the analytics event of the year, happening May 12th through the 15th in Las Vegas. Head over to alteryx.com/inspire to register now.
We would love to see you there.
[00:00:28] Meet Eric Daimler: AI Advisor and Entrepreneur
Welcome to Alter Everything, a podcast about data science and analytics culture. I'm Megan Bowers, and today I am talking with Eric Daimler, CEO and co-founder of Conexus and the first AI advisor to the White House under President Obama. In this episode, we chat about how his company consolidates data with AI, his takes on AI regulation, and the key to a positive future with AI.
Let's get started.
Hi Eric. It's great to have you on our show today. Could you give a quick introduction to yourself for our listeners? Sure, sure. Yeah. I've been doing AI for a long time, probably more than most. How people know me, if they know me, is often as the first AI advisor in the White House under President Obama.
My PhD's in the area. I was a researcher at Stanford and Carnegie Mellon, and ultimately a faculty member at Carnegie Mellon in computer science. I've been a venture capitalist on Sand Hill Road, and I'm on my sixth startup. This most recent one is an AI spin-out of MIT. So that's me. Awesome. Yeah, you have an incredible background, so I'm really excited to chat today.
That's kind. Thank you.
[00:01:40] The Genesis and Mission of Conexus
And of course, I'd love to start off with your current company, Conexus. If we could just hear a little bit more about what you guys do and what business challenges led you to start this company. Maybe I start with the last part first. This was a privilege I had from the position I had in the federal government, which is just seeing the largest AI implementations in the world, really, and the difficulties that people would experience from that.
Our taxpayers funded this technology at MIT to solve a problem in the Defense Department and at NASA around transferring models from existing technologies into new technologies without having to start all over from a fresh slate. There are advantages to starting from a fresh slate, or a clean piece of paper, but you know, the disadvantages are equally obvious.
You lose years of people's work in modeling aerodynamics in an airplane, or interactions in a rocket. This type of interaction and data is foundational to trusting the output of an AI model. So this was funded at that time, and I got involved passively after I got out of the federal government, just as an interested investor, and then decided to jump in full time not long after that.
What Conexus does is, broadly speaking, the infrastructure underneath AI. We often know that we can't trust the output of AI models, as it says at the bottom of many of these large language models, but we often can't trust the input either, because the data's not really always available.
You can kind of imagine a general in the Army asking some subordinate if all the data's been collected and is incorporated in the conclusions being given to her. That's really not an answer that can be definitively given. You might often just say, no, I haven't given you all the data, but I don't know where it is.
You know, I don't know what's missing, or I can't count on it. That'd probably be the honest answer. So what Conexus does is it uses a neuro-symbolic AI, a deterministic AI combined with probabilistic AIs, to guarantee the integration of data across large systems. These could be as simple as database migrations or automated lift-and-shifts, all the way up to much more complex data integrations for the largest organizations.
The benefit of which is that you can count on your data, and you dramatically decrease the cost and the schedule for integrating the data that you need to solve ordinary problems. Gotcha. Very cool.
[00:04:35] Conexus in Action: Real-World Applications
So I know building large language models is top of mind for a lot of listeners, a lot of companies.
So Conexus, how does it fit into that workflow of building a large language model? It's really complementary. So we might have all learned in grade school a philosophy about induction and deduction. These large language models are all inductive: all I've seen are white swans, therefore all swans are white, until proven otherwise. That'd be induction.
A deduction is: well, I know that all birds have wings; this animal has wings; therefore, it's a bird. It's just coming at this in a completely opposite way. It's deduction and induction. You know, Conexus will use a deterministic AI, that's why it says symbolic AI, to deduce facts that are already present in an organization's data.
That is a model that is best applied to operations versus exploration. So you and I and the rest of the world are summarizing emails and perhaps creating better ad copy using large language models. But if you wanted to run a power plant or design a rocket, which was the genesis of this technology, or again, any number of other critical issues, then you'd use a neuro-symbolic AI. You know, I was with a friend not so long ago who, working at a hospital, was trying to treat patients while fighting against his hospital's five departments of unintegrated data. He said, I fight against my hospital's own systems as much as I'm trying to treat my patients.
And this is patient clinical data that's just not available to the clinicians with the speed and assuredness that really needs to be present in those environments. So these are complementary technologies. And maybe we can go a little further. You know, when Uber originally started, well, everybody understands the Uber business model.
They gave a lot of freedom to the people building out that company. The idea was to grow as fast as possible. The result was that every city would have its own database, so you'd have one for Denver and another one for Boulder. And if you needed to reconcile driver supply or rider demand based on, say, a blizzard, you'd actually have literally manual comparisons.
You could have these very highly paid engineers literally pivoting their heads back and forth between two screens in some cases, comparing the statistical models for Boulder and Denver in this example. They ended up developing a team of hundreds of people that were doing this manual work globally.
You know, not just for those ordinary sorts of business analysis questions, but also for regulatory requirements. In some jurisdictions there are different privacy requirements for license plates versus driver's licenses of the drivers, again requiring some portion of that team to be manually ensuring the integrity of and conformance to these regulations.
That happens all over the globe, but this particular one happened at Uber, where there are very smart people realizing that this is not optimal and wanting to solve this problem. You know, they tried their best to solve it in the way that they knew how, but it ultimately came down to requiring some new technologies.
And that new technology in this case was Conexus. So Conexus came in and automated the process of integrating all of those, in that case it was something like 200,000 databases at that point, into a universal data model that could then, with alacrity, give leadership the business answers that they were looking for and give the legal counsel the comfort that they're conforming to regulations in a way that guarantees integrity.
And then those couple hundred people could do much more interesting work, because it's not much fun for anybody to be doing glorified cut and paste or looking between screens. That's an easy-to-grasp example of a deployment of Conexus where it provides value deep in the infrastructure of AI.
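To make that pattern concrete, here is a minimal, hypothetical sketch, not Conexus's actual technology: per-city tables with divergent column names get explicit mappings into one shared schema, so a question about rides can be answered across cities instead of by eye. All names and fields below are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical unified ("universal") schema for rides across cities.
@dataclass
class UnifiedRide:
    city: str
    driver_id: str
    pickup_ts: str

# Invented per-city rows, as they might sit in separate databases.
denver_rows = [{"drv": "d-101", "start_time": "2024-01-15T08:30"}]
boulder_rows = [{"driver": "b-442", "pickup": "2024-01-15T08:42"}]

# Each city gets an explicit, checkable mapping into the unified schema.
CITY_MAPPINGS = {
    "denver": lambda r: UnifiedRide("denver", r["drv"], r["start_time"]),
    "boulder": lambda r: UnifiedRide("boulder", r["driver"], r["pickup"]),
}

def consolidate(sources: dict) -> list:
    """Apply each city's mapping; a KeyError here surfaces schema drift early."""
    unified = []
    for city, rows in sources.items():
        unified.extend(CITY_MAPPINGS[city](row) for row in rows)
    return unified

rides = consolidate({"denver": denver_rows, "boulder": boulder_rows})
print(len(rides), "rides in one model")  # -> 2 rides in one model
```

The point of the sketch is only that once the mappings are explicit, the reconciliation those engineers were doing between two screens becomes a query over one model.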
I really like that example. And honestly, you look at these big tech companies and think that they have it all together, but to hear just how disparate the data could be underneath is crazy to me, but also so important in the age of AI, if you want to do anything, really, even just analyze your data.
It sounds like there's a lot of work to do for some of these companies if their systems are looking like that. And it also makes me think of our product here at Alteryx. We hear similar stories about having data in different systems, and then the time saved when customers use Alteryx to bring it all together.
But it sounds like Conexus would be maybe the step before, where you're just getting data into a data model, and then maybe someone could use Alteryx after that. How do you see those two fitting together? We're complementary to companies doing analytics. So, you know, we don't do the actual analytics.
We complement other companies that are doing the analytics. You know, we are not creating data lakes or data warehouses. We are complementary to all of these processes. There are a whole bunch of little point solutions that provide portions of what we do, and you could say that our automation from the neuro-symbolic AI is an amalgamation of all of these little point solutions into one.
You know, that allows databases to become a part of a universal data model, complementary to all these other excellent offerings. Definitely. Shifting gears a little bit, I think you mentioned earlier the importance of being able to trust your data, trust the inputs. What do you think data practitioners should be looking for when they want to build LLMs dependably and be able to trust the results?
We're looking at verification, provable verification on the inputs. You know, we want to be able to dependably represent the data that has been included, and that it's been included in a way that integrates cohesively. So much of LLM development can take some truth, and then some more truth, and some more truth, and still create some untruth out of that.
But, you know, the underlying infrastructure often is incomplete. And so the place where we look is in verification, the provable verification of structured data warehouses. Or we might even go further, to the structuring of warehouses to have these universal data models. And once we have these universal data models, it's our conceit that you'll therefore get much better output.
And then the implementations, like you say, can still be exploratory, because they're fundamentally probabilistic. You don't wanna be running a power plant off a large language model. You don't wanna be designing an airplane off a large language model. So it's important to be thinking about the ultimate implementations in a real-world environment, where a portion of the system may need to include a large language model, and a portion of the system will benefit by including a neuro-symbolic, deterministic AI.
Yeah.
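As a toy illustration of what verification on the inputs could look like in code, and emphatically not Conexus's method, a deterministic check can run first and refuse to hand incomplete data to any downstream probabilistic model. The field names and the rule here are invented for the example.

```python
# Invented example: deterministic input verification gating a pipeline.
REQUIRED_FIELDS = {"patient_id", "department", "recorded_at"}

def verify_inputs(rows):
    """Return a list of violations; an empty list means the data is usable."""
    violations = []
    for i, row in enumerate(rows):
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            violations.append(f"row {i}: missing {sorted(missing)}")
    return violations

rows = [
    {"patient_id": "p1", "department": "cardiology", "recorded_at": "2024-03-01"},
    {"patient_id": "p2", "department": "oncology"},  # incomplete on purpose
]

problems = verify_inputs(rows)
if problems:
    # Fail loudly instead of letting a probabilistic model summarize
    # unverified data.
    raise SystemExit("input verification failed: " + "; ".join(problems))
```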
[00:12:08] Understanding Neuro-Symbolic AI
And real quick before we move on, could you help me understand neuro-symbolic AI better? I haven't really heard that term too much. Well, symbolic AI had its apogee in the early eighties, but ran out of steam because people couldn't scale it. The popular expression of this was in expert systems, famously with IBM's Watson, which was just a marketing feint, as they, you know, managed that into oblivion.
We have learned to scale symbolic AI through the use of a domain of mathematics called category theory. With category theory, which is a type of metamathematics, we now can infinitely scale the deterministic, or symbolic, AI. We say the neuro part of neuro-symbolic to represent that it ultimately will include a probabilistic portion and a deterministic portion.
The best systems will have both. So there we have a symbolic AI part and a neural net part: neuro-symbolic AI. Okay. That's helpful. Thanks.
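A toy sketch of that division of labor, with the model, thresholds, and domain all invented for illustration: a probabilistic component proposes an answer, and a deterministic rule layer can override it, so known hard constraints are never violated.

```python
import random

def probabilistic_model(reading: float) -> str:
    """Stand-in for a learned, inductive model: a noisy guess."""
    return "safe" if reading + random.gauss(0, 0.05) < 0.8 else "alert"

def symbolic_override(reading: float, proposal: str) -> str:
    """Deductive layer: hard rules that always win over the guess."""
    if reading >= 1.0:   # invented physical limit: must alert, no exceptions
        return "alert"
    if reading <= 0.2:   # provably inside the safe band
        return "safe"
    return proposal      # otherwise defer to the probabilistic proposal

reading = 1.05
decision = symbolic_override(reading, probabilistic_model(reading))
print(decision)  # always "alert" at this reading, whatever the model guessed
```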
[00:13:13] AI Regulation and Safety
Well, you mentioned in your intro that you were a White House advisor on AI, so I'd love to get your perspective on some things regarding AI regulation. Back in, I think it was September 2024, there was a California bill on safe and secure innovation for AI models that ended up getting vetoed, but I'm curious what you thought of that regulation and what it means for AI regulation moving forward. I was one of the people on record as not being a fan of that bill. Not a fan at all. It probably, you know, represents the worst of San Francisco politics and California politics, which is this type of, I don't know, I might say arrogance: that we are not only the center of the world, but somehow the world.
And, you know, it's just the wrong place to be originating legislation like that, and it was shown in this particular attempt at a bill, which would just have these magnificently damaging outcomes. The regulation of AI is an important conversation. All of us need to be part of this conversation about how we want these automation technologies to show up in our world; there's a variety of ways in which they can be doing good and harm.
We know many of these already, many of these places where they can do good and do harm. But yeah, we all need to be part of that conversation and be gentle around the regulation. That's not to say not regulate, but we need to be gentle, because the danger of overregulation in this case is that it entrenches incumbents. Firms like mine, small and fast-growing, don't have an infrastructure to support a large regulatory department.
We need to remain nimble, and we need to conform to the existing rules as we understand them. You know, encoding and freezing in amber some current state of our understanding, and then creating an infrastructure around it to support that, is often just counterproductive to innovation. That's a general statement.
So in particular, what stood out on this bill was the inability to articulate any degree of a solution, or any quantification of its efficacy. You know, how to do what the bill claimed was just really left as an exercise for the reader: you all go figure it out, I'll just say what the world should look like, based on some other people with whom I work and their opinions of how powerful California should be in the world of regulating AI.
But we don't know how you're actually gonna do that. Therefore, and this is an important point, we can't say how big a deal this is gonna be for you all to be implementing conformity to this rule. We don't know. We just can't say. We're just gonna arbitrarily say that it's based on, pick your favorite future legislation.
It could be based on processor size, or it could be based on the dataset size, or it could be based on the power of the computers that were brought to bear. That's all silliness. You know, I think it's much better to be regulating outcomes than regulating inputs. That's kind of the traditional way that legislation is often formulated.
So I can't say I feel strongly about other regulations, but I knew this one was a bad one, for the reason that we just did not know how to implement it. We didn't know what the technical implications were. So I'd say it was not well thought out, but I can equally say that it's important that all of us get involved in the conversation.
There was some effort to advocate for a slowing down or a pause of AI development. Right, I saw that. I think that was just a media or PR exercise. To think that you're gonna stop automation, or stop the progress of automation, doesn't even address the issue of the degree to which adversaries of the United States might continue to develop, which would be true.
Obviously, the formulation of how one would stop the development of AI is completely an open question. So in my view, a much more intelligent way to frame the conversation is to have all those smart people, many of those signatories I respect, advocate for the acceleration of initiatives around safety.
That effort just needs to expand. People are new to this domain called AI safety. It's now been around for a few years, but it needs to see new energy, or just continued energy, so we can have a rich conversation. So rather than trying to slow AI development, let's accelerate the AI safety conversation, so we can have both seats at the table.
Then the intelligent citizenry, like listeners to this, can engage with each other about what we want this technology to be in the world, for all its various manifestations, whether it's in healthcare on the positive side or in privacy on the negative side.
We all need to be part of this conversation. Yeah, and I think you just answered what was gonna be my follow-up question of how we balance the risks and things you were talking about earlier with not wanting to hamper innovation. So your answer to that would be to focus more on building up AI safety rather than pausing or taking apart current AI initiatives?
Yeah. Megan Smith was the Chief Technology Officer of the United States during my term in the federal government. She had a great line where she just said, we need everyone on the field. I thought that was great. We need everybody on the field to have this conversation.
I may have had more formal education in this domain than most, and I may have had a breadth of experience, you know, from public policy and academia and venture capital and entrepreneurship. That's rare, but this conversation cannot be left up to me or people like me. It needs to include everybody, and not just because I'm trying to be magnanimous.
It's that we need everybody to embrace the technology, and if they're not part of the conversation, they're gonna have a degree of resistance around it, or reluctance to adopt it. And given the lifesaving potential of this, to say nothing of the dystopia that's, I guess, also possible in the Hollywood narratives, we need everybody to be engaged so they can understand what's hard and what's easy to implement.
You know, it's often the difference between an expert and an amateur in this domain: the amateur can be thinking something is hard when it's actually easy, and thinking something is really easy when it's actually hard.
So only when you're engaged in the conversation can you begin to know which is which, as we all discuss the ways in which we want these expressions to find purchase or be constrained. Having everybody on the field sounds great, but it also sounds hard. Are there initiatives or efforts, or how do we get more people involved, do you think, with AI safety and efforts like that?
I was just at this event where I was talking to a professor of cybersecurity at a historically Black college, an HBCU, in South Carolina. I was a little bit discouraged that she was forced by her administration to teach a curriculum that seemed like it was out of the nineties, discrete math and so forth.
So I bring that up just to say that it's one example, one that I just experienced, of this general holding onto the past, where we're forcing our children to be learning geometry and trigonometry and calculus, when really they will be much better served to be learning probability and statistics and category theory.
Although that kind of sounds highfalutin, it's not that hard. That's the type of place where we need to be having these conversations. Just, what is the future? Not teaching trigonometry and calculus just because that's what we've done for the last 50 or a hundred years. We need to orient to the future. You know, how do we have people engaged? Have this conversation be applied to people's real lives, and know that they have a voice.
I try to speak broadly on this just to include more people in the conversation, so that they can feel comfortable talking to their legislators, so that we can get better bills than the last one that was proposed in California. I think if we get more citizens involved, we'll be able to do that, and they won't be influenced just by people like me, for or against. I might, with some degree of self-interest, advocate for the reading of these national AI initiatives.
I personally think they were well written, where, you know, we, me and my colleagues, defined the future of government AI research 10 years out: where will we look to be allocating the US taxpayers' money in AI research, across the executive branch, from energy to transportation to defense.
I think those are really interesting documents, in that they define where, or at least the vision for, tax money going to research, and that's a good place to start. There are long-term views of AI like that, and then of course there are medium-term and even short-term views, where you could just be following the daily news.
The critical part about getting people involved is just to be experimenting with AI. One of the benefits of large language models, which were not available to me when I was in the federal government, is just having technology that everybody can relate to. That's certainly present with large language models.
Yeah, I think that was part of the thing that really helped ChatGPT catch on so quickly: people could go in and relate to it, see their use cases solved really quickly, or chat with the model, and it just exploded when it was first released. It did indeed, fastest ever. Yeah, the infrastructure of database integration does not experience the same quick rate of adoption.
Shocking. It's deep, deep in the infrastructure. And while people get involved with large language models and experiment with at least the four leading models, Perplexity, Gemini, ChatGPT obviously, and Anthropic's Claude, those are probably the four big popular ones right now. Claude's my current favorite; I engage with all four of them.
I can offer that large language models as an AI architecture are not the final form of AI, so we will have other AI architectures emerge over the next decade. Definitely.
[00:24:19] The Future of AI: Utopia or Dystopia?
I think that's a nice segue into the final thing I wanted to ask you about, which is just what's your perspective on how AI can lead to a better future?
Continuing what we were saying, AI will be the utopia of our dreams or the dystopia of some Hollywood screenwriter only to the degree that we participate. I remember President Obama had an oval-shaped carpet, of course, in his office, and along the rim of that was a quote that was at least credited to Martin Luther King:
The long arc of history bends towards justice. This is a quote that was inscribed on this carpet, and President Obama reminded us of the unsaid addition: not without our involvement. The long arc of history bends towards justice, but not without our involvement. I offer that here in relationship to AI: the future of AI can fulfill the utopia of our dreams, but not without our involvement.
That's a great line to end on for sure.
[00:25:18] Conclusion and Farewell
Thanks for coming on and for sharing your expertise. I loved all the examples that you gave from your time at the White House and then building your company, so it's been really fun to learn more from you. Thank you. That's kind. This has been a good time. Thanks for listening.
To learn more about the topics discussed in this episode and connect with Eric, head over to our show notes on alteryx.com/podcast. And if you liked this episode, leave us a review. See you next time.
This episode was produced by Megan Bowers (@MeganBowers), Mike Cusic (@mikecusic), and Matt Rotundo (@AlteryxMatt). Special thanks to @andyuttley for the theme music track, and @mikecusic for our album artwork.