The Missing Ingredient in Health Care AI? Community Voices

By Awais | March 27, 2026

JOEL BERVELL: Tech optimist Dr. Ivor Horn is my guest on this episode of The Dose. Dr. Horn is a pediatrician, professor, tech executive, advisor, and board member to some of the most high-profile health care and corporate entities in the world. But the title she claims most proudly: tech optimist. And in today’s age, saying you’re a tech optimist really means something, because we’re living through the hype, the hope, and the unintended consequences of rapid technological innovation every single day.

And when it comes to health and health care, those consequences become personal. I see it in residency, in the pulse oximeter readings we rely on, the monitors that never stop beeping, the electronic medical records that shape how we think about patients. Technology doesn't just sit in the background. It shapes the decisions we make and how trust is built in everyday health outcomes. At a time when health care options feel like they're shrinking for many Americans, and the promises and threats of tech are expanding, the real question becomes: what kind of future do we actually want to build?

Dr. Horn, thank you so much for joining me on The Dose.

IVOR HORN: Thank you for having me.

JOEL BERVELL: And I was just saying the last time we spoke to each other was in person years ago at South by Southwest in Austin, Texas. So it’s great to be reconnected with you.

IVOR HORN: Likewise.

JOEL BERVELL: So, you sit on the board right now of Boston Children's Hospital, and you recently served as the chief health equity advisor at Google. So your insights across the health care and technology spaces are broader than most people's. I want to start the conversation with some insights into what you are thinking about in this moment, when AI feels omnipresent: in boardrooms, in startups, in clinical workflows. From where you sit, what are your top-line concerns for patient care right now?

IVOR HORN: Yeah, no, thank you so much for that question. I think what I’m thinking about now are some broad categories that then come into patient care and thinking about patients. I really think about the importance of critical thinking skills, and that means critical thinking skills both for us as providers, but for patients and families who are caring for their loved ones, and the decisions that we make as clinicians and providers in the health care system. But how do people make those decisions outside the four walls of health care as well?

And the other one for me, I’m a researcher, so I spent over two decades in academic medicine. So for me, the significance of rigorous research in these moments is so critically important, and for us to understand the importance of conducting rigorous research. And this one might surprise you, but the other thing for me is the role of community.

JOEL BERVELL: Mm-hmm.

IVOR HORN: The role of community in being there for people, and people being there for each other, in those moments when they have questions. And I think about it for us as people who are doing technology work, people who are doing the research, people who are using technology.

How are we utilizing the knowledge and wisdom of community to help us in that decision-making process? For me as a primary care provider, when I was seeing patients, like the community really played an important role in how I understood my patients in context. How do we translate that into technology? So those are some of the big things that I think about under the big umbrella of responsible innovation in health.

JOEL BERVELL: Absolutely. I love each of those. And I think, like you said, I'm a little surprised. I maybe wouldn't have thought of community first. When we think about technology, we often see community and technology as separate. But I love the way you frame it: we still need that community there to inform everything that we're thinking about.

And right now we’re seeing that about $70 billion is flowing into health care tech. A lot of that is AI-centered or -supported. I think there’s something like 350,000 new applications, myriad projects. From your perspective, is the capital flowing to the right problems and are the right partners involved?

IVOR HORN: You know, is it flowing to the right problem? Is it flowing to the right partners? It’s really a resource issue. The people who are thinking about it are the people who can afford it, who can afford to develop the technology, apply the technology, monitor the technology, assess it, do the analytics. Those things are really important in, sort of, who’s minding the innovation? And oftentimes I say it starts with . . . it’s kind of those people who see how it financially benefits. So you think about large technology companies who have the financial capability to build large models, to do large projects, and move fast. You think about pharma and life sciences, they have the financial resources to do the work.

And that’s not new in AI. That’s innovation in general, right? We see that in health care, we see innovation. The MRI did not start in the community. Those who are also thinking about it and who are really concerned about it are also those who are supporting communities and who are working with the most vulnerable populations. So, you’re thinking about state and local public health organizations and departments who are thinking about how can we be able to utilize this technology when and where we can, with the financial resources that we have, to impact and improve outcomes and reduce risk. But what are the costs of adopting that technology and the inequities that can occur there?

JOEL BERVELL: I kind of want to pick up on that last point you were making about clinicians, especially at underserved community programs, for example, and whether . . . are these individuals, clinicians and patients, meaningfully shaping what’s being built or are they being asked to kind of adapt to it after the fact?

IVOR HORN: Oftentimes you see this work happening at large academic health centers and academic medical centers, and I think that’s the nature of the resources that are available to folks. So my hope is that we figure out a way to change that in creating that opportunity. I would have to say, I would love to say they’re at the table, they’re doing their best to get to the table and be a part of the table, and building their own tables. And that’s where community comes in. They’re building in community to say, how can we work together to pool our resources to actually do some of this work, but I don’t fully see it yet.

JOEL BERVELL: You also said something that this isn’t new what we’re seeing, that there was the MRI before, right, and how it was used. What lessons should we be carrying forward from earlier digital health missteps as we move deeper into this AI movement?

IVOR HORN: Yeah, no, that’s such a great question. And this takes me back to when I actually started at the intersection of health and technology. And the reason I got into this space is because I was serving my patients in Anacostia and I saw them with their flip phones, and they were coming to me saying, Dr. Horn, I know my kid looks fine now, but here’s what the cough sounded like, I recorded it on my phone. Or here’s what that rash looked like last night. And for me it was like, can you please build for my patients? If you build for my patients, you will build a better product for everyone. If you build for those who are most vulnerable, you are going to meet those needs.

The opportunities that we have around adoption in health care and in health systems, it really accelerated during the pandemic. So we have adoption in hospitals and in health systems. We have things like ambient listening, ways that we are making it better for clinicians so that they’re able to spend more time with their patients.

And my hope is that we can make those things more affordable and create those opportunities where those tools are available to clinicians who don’t have the same level of financial resources. An example of that is Doximity GPT. So Doximity has created this tool available to clinicians. Taking a resource and making it more broadly available and making it more equitably accessible is critically important for us to change the paradigm from the way that we did things when we launched digital health and the adoption of digital health.

JOEL BERVELL: I love that you give that exact, that specific example. But it leads me to wonder, how are we assessing these tools right now? Is there an equivalent to peer review, some type of standardized framework that ensures quality, safety, that there’s clinical competency? Or are we relying primarily on kind of market validation to see what hospital systems or individuals are adopting themselves?

IVOR HORN: Yeah, I think there is a lot of work being done from the research perspective. I think clinicians are engaged in the product development process in ways that had not necessarily been true 10 years ago. So you have clinicians who are sitting at the table having those conversations about what they build and understanding what they build throughout the product development lifecycle. And I think that’s really an important inflection point for us. So for me, that’s a good thing.

In addition to that, we also have researchers who are thinking about the rigor of the research. And I separate that out from research that is really impacted by the industry, and is really research coming out of industry that comes out fast. It says, here’s a benchmark, take this benchmark. For me, it’s like, okay, let’s step in and let’s do some peer-reviewed research and start having those conversations. And I think people are recognizing the importance of doing that as well.

Examples, I would say folks like . . . I look and see what Leo Anthony Celi and his team and his lab are doing at Harvard and MIT. Because I know they're asking really critical, rigorous questions about the adoption and the implementation of research. They're looking at how it impacts performance. And that's an important example of how we begin to think about rigorous research. The folks at Stanford, Nigam Shah and that group, absolutely are doing the work. The models that they're building within their systems, I think, are very important. Obviously the work that we do at Boston Children's: John Brownstein and his team have been doing this work a long time. Before the AI uptake, they were doing work in digital health and were very much early adopters of collaborating with the clinicians and the researchers at Harvard as part of that work. So there are pockets of folks who are doing that work. For me, the gap I would love to see filled is more engagement with community, to help give more context for the models that we're building.

JOEL BERVELL: And in the idea of community, I guess once a tool reaches the market and it’s out into the community, is there an accountability process? You’ve mentioned some of these kind of different silos of individuals that are looking at it, but is there an actual accountability process if something goes wrong, per se?

IVOR HORN: Yeah, I think we are at risk for things going wrong. I think when we look at the governance, when we look at the implementation of technology within systems, I think systems are working really hard to create checks and balances. And organizations that have the capacity to do that are doing that work.

Those are some things that we have learned. I think understanding the risks is still a part of the work that we have to do from a governance perspective. How are we asking what risks, you know, what are the protections? What are the safety, what are the quality protections? How are we, like, what’s the process before we adopt a tool, and how do we monitor that tool? When the baseline foundational AI model gets updated, how does that impact performance? What happens with drift? And every organization doesn’t necessarily have the capacity to be able to answer those complex questions.

JOEL BERVELL: Yeah. When we think about academia specifically, is it a meaningful part of the developmental process of the next generation of health care tools or is academia too restrained by its own systems or procedural requirements in order to fully participate? We think about Silicon Valley and the move fast and break things versus the academia of slower and research-backed and making sure that you’re having peer review. So, very curious about what you think about academia’s role in a system that often can move slow.

IVOR HORN: Yeah, you know, I was in academia and I left academia, and I was in big tech and doing research in big tech. And one of the things that for me was really important is we had to think about peer review and the rigor of peer review. If something is published without peer review, I'm still going to read that article with the same critical eye that I would as if I were a peer reviewer for a publication. And I think there is an importance of doing that.

Do I think academic research and the research review process need to find ways to pick up the pace? Absolutely. And I think that's one of the benefits, actually, of AI. We have the ability to create more efficiencies in the health services research process by using the technology. I think there's a need for education. There's a need for training of the workforce, both for the researchers who are doing the research and for the folks who are doing the peer review. We've all seen the things that slipped through. We've all heard the stories of folks who hid content in white text that a human reviewer wouldn't read, so that when the paper gets run through a model during peer review, the review comes out more positive. So we still have to make sure that we have humans in the loop as part of the process. And at the same time, I do think it is important for us to make sure that, as we're maintaining relevance from an academic perspective, we are finding ways to create efficiency and move things forward faster.

JOEL BERVELL: And as AI moves from the diagnostic realm into more clinical applications, what are you seeing as some of the biggest areas for potential?

IVOR HORN: We’re already seeing some of them. One, we’re seeing the work to reduce the workload for clinicians. The numbers are small, but clinicians seem to say they feel it. And that really matters. I think the ability to consolidate information and data in the clinical process is very important.

I think a big piece of the work is in research and development. The research and development and drug discovery process, I think, is going to be really important. One of the things that I am really excited about is the clinicians who are involved in the implementation process. I think we need to have more of the care team involved, but the fact that clinicians are there and part of developing the workflows is a very important part of the process, one that wasn't as engaged in the past as it is now.

JOEL BERVELL: I think I can second that feeling of the pressure lifted, as a busy medical resident having to keep track of a lot of information, whether it's just making charting a little bit easier. Or even, I think the biggest thing I've seen a lot is finding information, right? Like knowing, okay, let's say I have a patient who comes in with adrenal insufficiency, but they also have all these other conditions. At one point I would have to manually find the research study that talks about how to think about all these other comorbidities in the context of what to do, but now I can just use an AI chatbot or something that pulls it up in an easier way.

So absolutely, I agree that clinicians need to be at the forefront, feeling that weight lifted and having so much more time to actually focus on patient care. I'm curious about the challenges you've seen on the flip side as we move into this clinical application.

IVOR HORN: Yeah, the challenges are two things that we’ve sort of talked about: the financial resources for less resourced care teams, whether it’s a community hospital or federally qualified health center or rural communities or a primary care provider, you know, trying to remain in solo practice. The ability for them to adopt this technology in a way that is useful.

You know, you ask a physician in private practice, they’re like, I’m just trying to keep up with the things that are required of me. How can you ask me to one, afford, and then two, adopt this new tool? Because many of those are like small businesses. I think it is so important because oftentimes the community provider . . . can you imagine the ability for them to create a model where they can add the context of what they understand about the community that they serve in that model, and the social drivers of health as they’re providing care? That’s really powerful, and it’s more than just purchasing a tool from a vendor. It’s really about who’s going to help me do the analytics? Who’s going to help me create that model? Who’s going to help me sustain that model and update that model? That’s the thing that concerns me.

I’m a primary care provider. I’m a primary care community-based pediatrician at heart, even though I haven’t seen patients in a while. So for me, I’m always thinking about the massive amount of information that comes to a primary care pediatrician that could benefit from technology in that way.

JOEL BERVELL: Mm-hmm. And as innovations are coming about, and I think you’ve hit on this a little bit, but are these innovations addressing the most urgent gaps in our health care system? So we think about primary care access or maternal health. Or are we seeing more of a proliferation of tools that feel more like they’re duplicating things or disconnected from the areas of greatest need, or just incremental?

IVOR HORN: Yeah. There are easier places and spaces where technology can be developed, and developing at the front line, at what we call the last mile, is really hard. When your goal is to go fast and build new things, and, to be honest, to compete with others who are building things so you can build before they do or get somewhere ahead, then choosing those hard problems on the front line, the ones that require you to engage with partners, engage with community, and hear things that don't quite fit neatly in your four-by-four box or in what you want to build, is a lot to ask. So they're building the things they can build with the least amount of friction.

JOEL BERVELL: Which isn’t necessarily the things that affect the most people. They’re the things that we know we can get done. And like you said, it’s often the hardest things require the most collaboration between people, which this technology like this doesn’t often lend itself to. It can create silos or competition that can lead us not to be looking at other solutions. I really appreciate that.

I’m curious. Who is at the table right now when it comes to all these conversations that we’re having?

IVOR HORN: Depends on what table you’re talking about.

JOEL BERVELL: I think what I’d say the development of AI and using new innovations in clinical environments.

IVOR HORN: I would have to say the folks at the table are more than they used to be, but fewer than I would like. So we definitely have big technology companies, the Googles, Microsofts, OpenAIs, Anthropics. They're all there. They're all building things. We have academic health centers and academic medical centers: the Stanfords, the Harvards, and others who are building as well and doing really great work. I think the folks who are really working to try to be at the table are community health centers, the National Association of Community Health Centers, Federally Qualified Health Centers, trying to work and collaborate with others, to be at the table, to broaden that perspective.

I think what’s really interesting that I didn’t think about when I was in academia or when I was, like, seeing patients every day was the entrepreneurs and the innovators who are building in those spaces that are not really meant for big tech to build. And that’s okay because we need a whole ecosystem of folks building. And so there are entrepreneurs who are finding needs that are not being met in other places, and so they’re building there. And I think that part is really important.

There’s an organization called Health Tech for Medicaid, and Health Tech for Medicaid is focusing on entrepreneurs who are building for the Medicaid population, thinking about marginalized populations, and helping them to understand the complexities of what it means to build for those communities and to meet all of those needs. And I think having those folks at the table is really important as part of this process.

JOEL BERVELL: Absolutely. In your career, you've already been bringing different players to the table together. You were involved in the development of SCIN, S-C-I-N, the Skin Condition Image Network, Google's open-access dermatology image dataset that you helped create in partnership with Stanford Medicine, empowered by AI. It's a project that meant a lot to me as a medical student and now as an early physician, especially given the gaps in representation in dermatology training. And I'm hoping you can talk about how that project first came together.

IVOR HORN: This was one of the things for me. I appreciated the really smart people that I got a chance to work with every day. And as a researcher, I get to ask questions. And so we had a project called DermAssist. It was an initial effort to look at skin conditions. Once it was presented at Google I/O, researchers and dermatologists came back and said, wait a minute, you don't have enough representation across skin types and skin tones to actually do this work. That was really shortly after I came to Google, and I said, this is something we should be able to work on and solve. And what was also really powerful about that is we had a whole other group of folks who were looking at skin tone and creating a framework around skin tone related to marketing and images, for something completely different.

The Stanford people were the ones that raised their hand and said, yeah, we need to work on this. So it became a really easy collaboration for the researchers at Google and the researchers at Stanford to say, let’s see if we can do this, let’s see if we can get people to volunteer and share this information. And so when you talk about community, when people sort of say we want to be part of the solution and building something, that was really important.

The other thing that we did was a research study where we got skin condition images from all over the world as part of a collaboration with a group out of Seattle. And that's where we created the HEAL framework, the Health Equity Assessment of machine Learning performance, and we used skin conditions as part of that to create a performance evaluation: is the machine learning model that you're building creating more equity or worsening disparities?

And so just that little thing, someone raising their hand and saying, oh, no, no, no, you guys didn't quite get that right, was really important. But it was a matter of stepping into the void and saying, we didn't get this right. Let's not just do nothing; let's lean in and fix it. And I think the ability to do that at that moment, in that time, was really important. And now, the ability to say, we didn't quite get that right, hold on a minute, let's lean in? I don't know how easy that is now.

JOEL BERVELL: Thank you for sharing that. I actually remember when that pushback was happening. I think I may have added to the discourse by making a video about it and how there wasn’t enough images.

But then also seeing the changes that were made over time, from the project you talked about to the image equity project that was happening with the Google Pixel phone. I remember that happening. And so with all of those connections, there was so much more, like you said, that came out of it downstream, like Real Tone. Which then became kind of a marketable thing as well, but it made sure that the new Google Pixel phone was able to better see darker skin tones, which would actually improve the pictures people take for AI algorithms to analyze. And that helps not just one company, but all companies, when we look at new technology that sees people differently. So thank you for being there at that time, for being willing to say, hey, let's dive deeper. Because, like you said, the downstream effects are so vast when that happens.

I'm curious, what similar projects exist now, or where do we need to widen the lens? You've given such a good example already.

IVOR HORN: I think as we talk about AI, we need to widen the lens on who’s a part of the evaluation process. When we did the HEAL framework study, first of all, it was great because a bunch of people raised their hand and volunteered to do that work. But for me it was very critical that we sent it for peer-review publication, and they did and it made our work better. And I think that was really important, the rigor of that. Because then it could be applied in ways that, you know, make products better.

I think where we need to begin to expand on this, as we begin to think about AI and then making sure that we are including community beyond the internal researchers and the organizations that can afford it, but also partnerships in collaborations with, whether it’s minority-serving medical institutions or HBCUs [Historically Black Colleges and Universities] or Latino undergraduate organizations — all of those groups need to be brought in.

We need to think about how we are engaging rural communities. How are we thinking across the spectrum? Because people who experience disparities don't look a certain way. It's your neighbor, it's your friend, it's your cousin, it's your auntie, it's your grandparent. This is something that you do so well, that sort of education and information. We think everyone has access to information and understands things, and it's not true.

And it’s hard for some folks to know whether something is real or it’s AI. And that information and misinformation and disinformation can be dangerous when it comes to health care. And if you are a family and your child is sick and you’re like, is that cough bad enough to go to the emergency room? Because if I go to the emergency room, I can’t pay my light bill. And I go and I open up ChatGPT, and I ask ChatGPT a question, and ChatGPT says, oh yeah, that’s fine, you have 24 to 48 hours. And the reality is that’s not the case. The impact is very significant. And so the importance of us working with these models and making them better and having accountability and educating the community is . . . all of those things are critically important now as part of this work.

JOEL BERVELL: There are so many gems in what you said. I think we used to joke that our parents would fall for those spam calls we'd all get. But now everyone's falling for things like AI misinformation and disinformation, especially when it comes to health care, and it shows that we're all susceptible to it.

And I kind of want to ask about iterative thinking. We hear a lot about iterative thinking in technology, but health care systems are deeply entrenched with legacy infrastructure, existing hierarchies. There’s concentrated decision-making power, right? And in that reality, is there resistance to truly rethinking systems, especially when it comes to kind of confronting and eliminating these biases that we’re talking about right now?

IVOR HORN: One of the things that I am quite honestly excited about is I’m excited about the new generation of physicians taking over. You know, folks need to get out of the way. There are people who are tech optimists like me who are like, let’s do it. Let’s figure out how to build agents and let’s figure out how to build agents better. Yet at the same time we have digital natives who are coming into health care. And I do think it is really important for this generation of caregivers, of clinicians to not wait for someone to give you permission to say we can do and we can build things.

And that's why working with entrepreneurs, for me, is such an uplifting thing. I'm like, let me tell you the old-school stuff so that you don't do that. Someone recently said to me, I'm trying to do this project and I'm going in to work with health systems because I'm trying to fill a gap that they even know needs to be filled, but I keep running up against this bureaucracy, and they're asking me, Ivor, is that real? And I'm like, yeah, it's real. Let's have a conversation about that and let's figure out how we can effectively disrupt ourselves in a way that is positive, that is driving health outcomes, that's driving improvements everywhere. Because we don't have the time to just keep saying, well, this is just the way that it was done before. We don't need another pandemic to force us to move faster. We should do it in a better way.

JOEL BERVELL: I want to end with this question: You call yourself a tech optimist, and in a moment where trust really does feel pretty fragile when we think about institutions, data, in medicine, especially, what does responsible optimism actually look like to you? And what would have to be true five or 10 years from now for you to say we got it right?

IVOR HORN: Responsible optimism is about not being afraid to ask the hard questions and the unpopular questions, and to say the unpopular thing of that’s not right, or we can do that better, let’s do that better. I think that is the optimism.

And I say this from a technology perspective, really, because we’re so busy in a race to beat the next person that we are putting on rose-colored glasses when we should be using our critical thinking skills to ask ourselves the hard questions and say when we’re not quite sure about things.

JOEL BERVELL: Absolutely. Well, Dr. Horn, I want to say thank you so much for this incredible conversation. It’s been a long time coming, and I want to say thank you for your optimism too, I think especially in a moment that demands both imagination and accountability. You’ve already been doing the work. I know you’re going to continue doing the work. And thank you for inspiring people like myself who hopefully will take up the mantle and continue all the work you’ve already started.

IVOR HORN: Thank you so much.

JOEL BERVELL: This episode of The Dose was produced by Jody Becker, Jesús Alvarado, and Naomi Leibowitz. Special thanks to Barry Scholl for editorial support, Matthew Simonson for recording assistance, Jen Wilson and Rose Wong for art and design, and Paul Frame for web support. Our theme music is “Arizona Moon” by Blue Dot Sessions. If you want to check us out online, visit thedose.show. There, you’ll be able to learn more about today’s episode and explore other resources. That’s it for The Dose. I’m Dr. Joel Bervell, and thank you for listening.
