Product in Healthtech

Jennifer Geetter of McDermott Will & Emery

Episode Summary

Jennifer Geetter, a partner at McDermott Will & Emery, shares insights on AI and data governance, compliance issues related to AI and privacy, and how state and federal guidance can impact AI innovation in healthcare.

Episode Notes

Jennifer Geetter, a partner at McDermott Will & Emery, shares insights on AI and data governance, compliance issues related to AI and privacy, and how state and federal guidance can impact AI innovation in healthcare.

00:00 - Introduction

01:02 - Jennifer's background and role as a digital health lawyer

04:11 - Types of clients / legal needs in digital health

09:37 - COVID's impact on healthcare delivery and digital health trends

14:41 - Challenges faced by digital therapeutics and the future of digital health

19:21 - AI capabilities, regulatory guardrails, and considerations for healthcare companies

24:55 - Specific applications of AI in clinical contexts and physician support

28:38 - Balancing AI risks and benefits in healthcare

33:53 - AI governance, regulatory landscape, and the importance of public trust

 

Jennifer Geetter: https://www.linkedin.com/in/jennifer-geetter-6107406/

Chris Hoyd: https://www.linkedin.com/in/chrishoyd/

For the full YouTube video: https://youtu.be/l6-F2Stemv0

McDermott Will & Emery Website: https://www.mwe.com/

 

Episode Transcription

Chris Hoyd  0:08  

Welcome back to Product in Healthtech - a community for health tech product leaders, by product leaders. I'm Chris Hoyd, principal at Vynyl. Today we're talking with Jennifer Geetter, a partner at the law firm McDermott Will & Emery. Jennifer advises healthcare and digital health clients on the legal and compliance issues associated with bringing innovative healthcare solutions to providers and patients. Jennifer shared some great insights today about how innovators should think about AI and data governance, compliance issues related to AI and privacy, and how the patchwork of state and federal guidance on AI can both inhibit and, in some cases, accelerate AI innovation. Let's jump into that conversation. Hi, Jennifer, welcome to the podcast. I've been looking forward to this conversation since we met at ViVE a few months ago. Let's just start with your background and how you got to your current position - can you talk us through your journey a little bit?

 

Jennifer Geetter  1:02  

Sure, it's good to see you again. And thank you again for the records - they were a big hit. So I would say I've been just very lucky. No one starts out 20 years ago and says, I'm going to be a digital health lawyer or an AI lawyer. Those careers didn't exist. But I have a sort of saying - follow the data. So I did research compliance, which led to genomics, which led to large data-related work, and that led to AI starting around maybe 2016. So I've just been lucky to work on issues at the intersection of healthcare and digitalization, and to see them both in my life as a patient and my life as a legal practitioner.

 

Chris Hoyd  1:49  

So you just followed your own interests and your own conviction about what might be interesting, and all of a sudden, you're a leader in the field.

 

Jennifer Geetter  1:57  

If you say so, I'll take it.

 

Chris Hoyd  1:58  

For our listeners who maybe are coming more from a product or operator perspective and may not know what the day-to-day of a lawyer is like, can you talk us through what your day-to-day is like and what you love about it?

 

Jennifer Geetter  2:18  

Yeah, sure. So I would say I function a little bit like product counsel, but just for a lot of different products. So the issues that I work on - privacy; data strategies, which is related to privacy but is really about how you use data as an asset within an organization; research; innovation; public trust, which is something I hope we get to talk a little bit more about, because I don't think we talk enough about it - these are all embedded in how you develop a product: its features, its safeguards, how you talk about your product. And also your customers - they could be health plans, they could be hospitals, they could be patients - they all have both regulatory needs and normative needs, you know, expectations about how products will work in their environment. Those are difficult to reverse engineer. So I typically work with clients really from the beginning through the end, to try to identify these issues as we go and make risk-based decisions for how we are going to deal with them, especially because innovation in products is happening faster than innovation in law, in many cases. And so trying to understand what the answer should be, or what the possible answers are, is really, in my experience, part of developing products that will have stickiness with your customers. So that's my typical day. It could be on an actual product, it could be on a service. The clients range from traditional health care clients to - and I think this is common for many digital health lawyers - retail companies or other folks that are new to health care but are taking the expertise they have in another market and seeing if it can be transferred or made applicable in healthcare. So in a given day, I might work on a dozen different things. And that definitely makes it exciting and hectic.

 

Chris Hoyd  4:10  

Very cool. Okay, so are there any examples you can point to? Or maybe generic or genericized examples of you anticipating the regulatory issues for a client, and that working out well for them - then maybe expanding into a market, or a new round of funding, or whatever it might be?

 

Jennifer Geetter  4:33  

I think sometimes there's an impression that lawyers are the party of no, and that our job is to come in in the bottom of the ninth inning and mess it up. Actually, we do our best work in the early innings. So our job, if we think about it right, should be figuring out the 'yes, but' - you know, maybe you can't do it exactly the way you planned, but you can do it some other way. And it should be, in my mind, one of two things. A reasonably fast fail - so identifying for companies the product ideas that just don't fit, for whatever reason, in the current marketplace. That could be that there's no reimbursement available for it, you may not have a commercial strategy, it may be too crowded a field, it may have privacy issues. And then, assuming that's not the case, how do you pick smart risks? And I think that's really important - what risks are really worth it? Because they're essential to the safe and effective use of the product, they're reasonable under existing regulatory regimes that may be a bit ambiguous, or they're worth it for a pilot. You know, that kind of balancing - there are not always black and white answers in digital health, and you have to be comfortable operating with some ambiguity. So an example would be thinking about a product or service that would require a large amount of a certain type of data: inventorying the data available for a particular product and service and figuring out, do we have it? Do we have the rights to that data? Do we have the kinds of rights we need for this product or service? Or is it going to flounder because some of the data inputs aren't going to be there? Or another example: everyone knows that the US health system has some hiccups in how it's designed - people get their health care all sorts of different ways - so thinking about how your product will be paid for. Is it going to be reimbursed through standard insurance? Or do you need to look for other ways of doing it - for example, a per-member-per-month fee structure with a self-funded health plan that's interested in plug-in solutions, where there may be some expertise, especially for certain types of specialty care - behavioral health, fertility, and other types of services - where you might look outside your typical network. So, sort of thinking through all of those different pieces. And I work as part of a team, as most digital health lawyers do, to anticipate those types of issues.

 

Chris Hoyd  7:08  

Okay, so can you talk a little bit about the mix of clients - or types of clients - that you work with? Is it mostly venture-backed early stage companies? Is it, you know, pharma looking at digital innovation? Are they health systems? Is it just a mix of everything?

 

Jennifer Geetter  7:25  

I mean, the nice thing, I think, for me and for the team that I work with is that we see all different types of clients. Not only does that make our jobs more fun, I think it makes us more effective as attorneys. We have a sense of all of the different types of stakeholders that are trying to deliver health care, whether that's on the product side or the front lines, and you need a sense of that. You can have the best, most disruptive technology, but if hospitals are going to be afraid to use it, then the disruption wasn't constructive - it didn't get us anywhere. So being able to speak with clients and say, this is what health plans are going to be listening for, this is how you can make your product or service more compelling. Or, look, we work with a ton of provider networks, this is what they're going to find a little clunky about your offering, here's how you might change it. So I think working with a diverse array of types of clients obviously makes you a better lawyer. The early stage companies are exciting and challenging. They have a lot of needs, and it's sometimes hard for them to prioritize legal needs when they're trying to keep the lights on. So with our early stage companies, what we're really trying to do is identify the gating legal items - the things that, if they don't think about them now, mean they're building their business a bit on sand - and then helping them identify those issues they can come back to when they've gotten to a further series raise, really triaging their legal needs. So we work with everyone from very early stage companies to, you know, the largest companies that are out there, both here in the US and globally.

 

Chris Hoyd  9:04  

Very cool. Yeah. And you used the word 'clunky', which I think for a product audience is maybe a trigger word. And that really does speak to your role as product counsel, it sounds like. So you are really getting into conversations not just about risk, necessarily, but about the viability of a lot of these products.

 

Jennifer Geetter  9:24  

Yeah, it's fun, and I think it mirrors how product counsel often works inside companies, when they're really involved as part of the development and innovation team.

 

Chris Hoyd  9:36  

Yeah, that's awesome. Okay, so I'm curious - and maybe this will be tough to find the broad strokes for, because the mix of clients is so variable - but I'm curious whether, over the last couple of years, you've seen any shifts in the kinds of issues that clients are worried about or bringing to you for advice on.

 

Jennifer Geetter  9:57  

Sure. I don't think you can overstate COVID's impact on healthcare delivery. COVID, among many other things, served as an experiment that you could never have ethically run on a volunteer basis, which is: how will people get their care if, by and large, they can't leave their homes? And so while there were lots of telehealth and digital health companies before COVID, it created a massive market that was essentially completely elastic - there was only growing demand. And I think we learned a lot from that experience that we're now working through in the post-COVID years, when people have a choice. One thing that we're seeing is that, in many cases, people prefer a digital solution. They didn't go to see, for example, their counselor in person for years; they got used to getting their counseling on Zoom, and we see an explosion of behavioral health related services. So I think that's one thing we see as a trend - when I say digital health, there's no such thing as just 'health' at this point. Everything has a digital health component; even brick and mortar care has a digital health component. So I think that's a major thing: what is the best way for my company to deliver my product or service, from entirely virtual all the way to largely brick and mortar and everything in between? Another thing that we're talking more about, and I think it's really important, is social determinants of health. And that has a big impact on digital health delivery - not everyone is connected in the same way to digital health. And that's not just patients, it's also providers. So, for example, I know we're going to talk a little bit more about AI later on, but when you think about AI governance, that's going to be more easily done - it's hard in all cases, but more easily done - by a large health system than by a smaller community hospital that is at the front lines of health care in its community. So we have not just the traditional disparities in health care; I think we're starting to see a technological divide that we are going to have to think about. And when you hear about discrimination in AI models, that's another variety of that - the data the model was fed isn't necessarily representative. I think another conversation that I see companies having more and more is: did they actually build their businesses to compete in a data economy? So, you know, this may not be the most exciting topic in the world, but do you have data mapping inside your business? Do you actually know what data you have? How granular it is? Is it well curated? Have you brought the same rigor to your data inventory that you would bring to your components inventory if you were making, you know, a tangible device product? There was this huge big data revolution, and no one wanted to be left behind. And I think what we're finding is that the quality of the data we have to power our innovation isn't always there. We're hearing from AI companies, for example, that they can't get enough good data to train their models to be fit for purpose. So I think these are common themes: how do we go back to the basics in some way and build a data architecture within our business that is fit for scale?
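(To make the data-mapping idea concrete, here is a minimal sketch of what one entry in such a data inventory might capture. The field names and the example asset are illustrative assumptions, not a standard schema or anything Jennifer prescribes.)

```python
# Illustrative sketch only: one way to record a single data-inventory entry,
# tracking what data you hold, how granular it is, and what rights attach to it.
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str                       # e.g. "claims_2019_2023" (hypothetical)
    source: str                     # where the data came from
    granularity: str                # e.g. "patient-level" or "aggregate"
    identifiable: bool              # does it identify individuals?
    permitted_uses: list[str] = field(default_factory=list)  # rights actually held

    def can_use_for(self, purpose: str) -> bool:
        """Do we hold the rights this product or service needs?"""
        return purpose in self.permitted_uses

# Example: flag a gap before building a product on data you can't use.
asset = DataAsset("claims_2019_2023", "health plan partner",
                  "patient-level", True, ["care_management"])
print(asset.can_use_for("ai_training"))  # False -> a data input won't be there
```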

 

Chris Hoyd  13:23  

Interesting, yeah. Not to plug my employer, Vynyl, but that's something we've started to realize, because we happen to be, you know, good at the non-sexy stuff.

 

Jennifer Geetter  13:33  

Non-sexy stuff, it turns out, is really important. It's pretty important. And it can sometimes be hard to get folks excited about it, but it's the basic building blocks of digital healthcare. And, you know, you walk the halls of HLTH or ViVE and you see one amazing far-out idea after another, but they're all based on having a sound structure. And so one trend that I see, and I think is overdue, is asking those foundational questions. We have to think of something better than 'non-sexy' - 'foundational', I think, or 'critical'. It's critical architecture: you need highways in a country, and we need that on the data front. And we do not have an infrastructure bill for data the way that we have for other types of infrastructure, and it really shows.

 

Chris Hoyd  14:27  

Oh, that's an interesting analogy. I love that.

 

Jennifer Geetter  14:31  

Roads aren't sexy either, right? It would be really hard to get around without them.

 

Chris Hoyd  14:36  

Okay, and that's more than just interoperability.

 

Jennifer Geetter  14:40  

Way more. Interoperability is certainly a piece of it. And, you know, I think there are analogies you could draw to make that real for listeners, but it's only one piece of it. There are so many other things that we need beyond interoperability that we don't have. Some of those are a reflection of our state-versus-federal system. Some of those are privacy artifacts. But some of this is what I've started to call data NIMBYism - we all want the benefits, but none of us want our data to be used for this or that. There's a whole list of issues that, again, I'm going to call infrastructure, because I think that sounds better, Chris, that folks need to think about. And we spend a lot of time on them.

 

Chris Hoyd  15:24  

Interesting. And I suppose that's very context specific based on the client, whether it's an early stage company or a health system, and maybe even contextual based on, you know, a health system's contractual arrangement with their EHR. You must see 31 flavors of issues related to this.

 

Jennifer Geetter  15:46  

The Baskin-Robbins of digital health, right - something like that, 31 flavors. Yeah, we see it in every different way. But it is important to recognize these are not completely sui generis questions. I mean, there are themes, and we need to take seriously that the common denominator for most of these businesses is the quality of the data. That's not taking anything away from the IP or the people, but without the data, they aren't gonna be able to do what they want to do. And thinking about your data strategies - the interoperability piece, the privacy piece, the normative piece, the IP piece - these are essential ingredients to make sure that you can do what you set out to do.

 

Chris Hoyd  16:32  

You get so close to the business model and the viability of a lot of these companies. We kind of went through this phase - I want to call it a bubble that popped - but there was, I would say, a focus of investment and energy on the world of digital therapeutics for a few years. And it seems like that may have decreased a bit recently. I'm just curious if you came close to that at all, and if you have any thoughts on where that sub-sector could go to find some more traction. But yeah, just throwing that out there.

 

Jennifer Geetter  17:07  

Yeah, I think it's a great question. Look, last year was not kind to digital health, or to many sectors. You know, my hunch - based on nothing - is that that is temporary, because healthcare is a huge single sector of the United States economy. And for reasons we've already talked about, we are not going back to an entirely brick and mortar system. People don't want it, and it's also not cost effective. In many cases, digital health saves federal health programs and others money - you can see more patients, you don't have folks sitting in waiting rooms, and so forth. That's a long list. So I think this will come back. But it's been tough, and digital therapeutics are some of the hardest products to get approved. If you have a consumer-facing app that assists with health and wellness and falls outside of what the FDA directly regulates, your path to market isn't easy, but it's more direct. Digital therapeutics were always going to be a highly regulated category, so there was always going to be a lag time as the FDA thought about those products and established, you know, safety and effectiveness standards, and as they began to get to market and then get reimbursed. The one thing that's important to remember is that FDA approval means that your product is sufficiently safe and effective, as determined by the FDA and consistent with your label, to be marketed in the US. It doesn't mean anyone has to pay for it or include it in their plan of benefits. So this is a multi-year process for products that are so highly regulated and may not fit neatly into an existing reimbursement code - or the existing reimbursement code, and I think this is important, may be too little. So there may be a code that would be appropriate, but ultimately it's not commercially sustainable. So then you're going through the process of trying to get new reimbursement coding, which we help with at McDermott, but that's also a long process. So these are some of the coolest products, but they are not, you know, overnight success stories by and large. But I think they're coming. I don't think you can stop the digitalization of healthcare in really any respect.

 

Chris Hoyd  19:20  

Okay, so on that optimistic note - and, you know, I think we do have a few questions on AI that I'm very curious about your thoughts on. I don't want to over-AI us, but I do know you're living and breathing a lot of this stuff right now. Given what you're currently seeing out of AI capabilities, how do you see those being applied in the near term in health tech? And how do you advise a company that's exploring those capabilities - what are the regulatory guardrails in the space that they need to be aware of?

 

Jennifer Geetter  19:54  

I'm glad you asked. I think first we have to decide what we mean by AI. We're all using this term, and it covers the gamut from basic machine learning - really, algorithmic thinking - to the, you know, sci-fi stories that we read about in the newspaper, the sort of 'the robots are coming for us.' That's an incredibly broad spread of technology. And so when companies are thinking about both their internal compliance programs and the regulatory frameworks that exist, the answer to that question is going to be different depending on the type of AI they intend to use. And I think we sometimes collapse this all behind, you know, one term or one definition, but these are vastly different products with different abilities and different risks. And if you had an AI governance program that was regulating your use or development of generative AI, and it was based on, you know, earlier, already well-deployed AI, you'd be missing a whole bunch of issues. And the reverse is also true: if you use the safeguards that are appropriate for Gen AI for your basic, you know, administrative AI functions, you're probably over-compliancing yourself. You won't get in trouble for that, but you're probably making it much harder to run your business and not getting much compliance boost for it. So I think the first thing is figuring out what kind of AI you're using and why. I think the second - folks have heard me say this before - is just be wary of AI FOMO. You don't have to use it. If it's not the best solution for your problem, that's fine. You should use particular AI products, services, and tools when they are your best choice, both economically and for safety and effectiveness reasons. But if you have a non-AI solution that's working just fine and you're getting your job done, that's also okay. And some of the compliance and implementation hassles we see come from the sense that people are missing out - that they have to implement AI before they really need it or are ready. They don't. I think the other thing I would say is to watch the states. Many states have started to regulate AI, or are in the process of thinking about how to do that when it comes to health care - sometimes in consumer protection spaces, sometimes in health care licensure spaces. I think they feel like they need to kind of get off the bench, because we don't have a lot of regulation or guidance at the federal level. This could look a lot like our federated privacy system, right, where we have the federal government, we have non-preempted state law, and companies have struggled for years with how to have a national deployment when privacy rules differ by state. So I think really working with state regulators - helping them understand how AI works, getting them comfortable with where they may have a role regulating and where, you know, they don't need to feel like they do - is important. One major caution I would give folks who are trying to develop AI is to mind the line, mind the gap, between AI training and research. By and large, research involving human subjects is defined as an activity where you interact with someone or intervene with them - like take a blood draw or give them a survey - or about whom you have identifiable data. And it's an 'or.'
So if you are doing an analytic project to generate generalizable knowledge on large datasets, you very well may be doing research involving human subjects if that data is identifiable. If you are training AI in certain ways to see if it can perform certain functions, you might reach a similar result. And I think because we talk about training AI instead of studying AI, folks are sometimes missing at least the need to call the question of why or why not. It may, you know, it may be research. And then finally, I would say the IP landscape is evolving for AI, and you need to think of it as both inputs and outputs. So what are the IP considerations for the data you are using to teach your AI? And then, what is the law saying right now about the protectability of the AI outputs? Sometimes these, I think, are getting collapsed, but they're actually two different IP questions, and you really should go to an IP AI expert - I'm lucky to have them here at McDermott. That is a very technical question, but I think an important one to be asking.
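(A toy encoding of the 'or' Jennifer emphasizes, paraphrasing the human-subjects definition she describes in deliberately simplified form; the function and its parameter names are illustrative assumptions, not a legal test.)

```python
# Simplified paraphrase of the human-subjects analysis described above:
# the activity aims at generalizable knowledge, and then EITHER
# interaction/intervention OR identifiable data alone is enough.
def may_be_human_subjects_research(generalizable_knowledge: bool,
                                   interacts_or_intervenes: bool,
                                   identifiable_data: bool) -> bool:
    return generalizable_knowledge and (interacts_or_intervenes or identifiable_data)

# Training a model on identifiable records, with no interaction at all:
print(may_be_human_subjects_research(True, False, True))   # True -> may be research
# The same project on fully de-identified data:
print(may_be_human_subjects_research(True, False, False))  # False under this toy test
```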

 

Chris Hoyd  24:54  

So is it fair to say that there are a lot of dimensions of this technology where the innovation is ahead of the regulation by quite a ways?

 

Jennifer Geetter  25:03  

Yeah, the innovation is ahead of the regulation, because there is no way lawmakers could operate at the speed of innovators. It's an impossible task. And we have so many different innovators who are trying to do different things. I think if you look at the concerns at the federal level, they're more focused on either generative AI - you know, concerns that AI can jump the box, essentially - or on discrimination: either AI tools that will give faulty results because they've been poorly taught, or AI that is inherently, you know, discriminatory or dangerous from a public policy perspective. Impersonating someone's voice, for example - we saw this with impersonating someone's voice and pretending you're a candidate, and we see it in certain types of spear phishing and ransomware attacks, where the AI makes the attack seem even more real and harder to detect. So I think some of the concerns are really about where AI is acting in a nefarious way, or a dangerous way. And I think we have to be very thoughtful about those.

 

Chris Hoyd  26:21  

That was an incredible answer, thank you. As you said, there are so many versions of AI right now that we're gonna learn a lot. One more specific application of it is, as you sort of highlighted, the applicability of generative AI to capturing, empowering, summarizing, and recommending within the context of clinical conversations - whether it's something that lives in an EHR and supports a physician during a patient visit, or helps them with their, you know, charting later on. I'm curious if you've seen anything specific in those contexts that you tend to advise those clients on, or try to get ahead of.

 

Jennifer Geetter  27:11  

AI doesn't get tired. So there are tasks that AI can do that we should probably give it. I remember listening to an AI discussion where there was a physician who said, you know, that they found patients preferred prescription-refill emails that were AI-written, because they were nicer. It said things like, 'Hey, Chris, I'm so glad I could get this prescription in for you before you went on vacation.' Whereas the physician, who is doing a million different things, is just glad he got the prescription in for you and wrote to you, Chris, 'filled' or 'sent.' So there are certain types of tasks like you're describing, that go to sort of patient management, where the thoughtful deployment of AI may improve both the patient experience and the provider experience. Our providers are very burned out - COVID was the straw that broke the camel's back in this regard - and I think we need to take that burnout very seriously. And so if you have AI that you can deploy like a personal assistant - it gets a first draft of your note written, it tees up your prescription refills, it alerts you in a clinical decision support mechanism about those patients that really need your time today, because they haven't picked up their prescriptions or they're late for an appointment or what have you - you'll see better outcomes for both patients and provider satisfaction. The important thing to ask yourself is: what can this tool do, and what can it not do? And that should be what we do for any tool in the healthcare space. This is not a new thing; we have off-label use for a reason. So what are these tools capable of doing? Let's take a scribe, because you mentioned it. A scribe can be very good at taking a complicated appointment, organizing it, and getting that first draft down on paper. As human beings, we appreciate this, right? It's often the first draft of anything that's the hardest, and then you refine it. If our physician can then read that note and clarify some things, but they aren't doing the labor of writing up the note, that can be really helpful. But the physician is the last pair of eyes, for example, on that note - it's the physician's note. This is no different than having a note-taker in the room; that physician would still be responsible for taking those notes from a human being and turning them into something for the EMR. So I think in a lot of the documentation realm - the planning, the triaging - AI can be really powerful, because these aren't necessarily tasks that patients care whether their providers are doing. And patients are already getting AI-enabled care in this way: clinical decision support is a kind of machine learning. It's not generative AI necessarily, but it's machine learning. So I think thinking of it that way is really important. The second thing is getting patients comfortable with it. I teach a law school class, and I assign 'Mission Impossible' to my students - probably not a common component of a law school curriculum. But the last two movies - the last one hasn't come out yet - involve 'The Entity', you know, which is this sort of evil AI thing that controls people. That has a huge impact on the public imagination - of AI going inherently rogue. And I don't mean to minimize the risks of AI, we can certainly talk about them. But I also think it's worrisome to exaggerate them.
And so if you are, you know, a woman over, I don't know, 40 or 45, and you've had a mammogram every year, you've probably had a mammogram with AI-assisted radiology review, and you didn't think anything about it. You weren't worried about it, and you weren't concerned about the use of AI in your care, because you didn't really think of it as artificial intelligence. Now that we have the public marinating in all of these stories around AI, what if you had an AI scribe in a hospital room, and the patient was discomforted by this, like, audio-video entity in the room recording it, and patients decline the use of those tools? Is that really an informed choice? So I think we have to empower folks to have thoughtful conversations with patients, so that their appreciation of the risks and benefits is balanced, so that when AI is going to be useful to them and to their providers, that can go forward. I don't know that we're doing that yet.

 

Chris Hoyd  32:06  

Okay, so it does sound like you're of the opinion that, at least for now and in the foreseeable future, the value offered by some of those generative AI applications, you know, well outweighs the risk, the downside?

 

Jennifer Geetter  32:23  

I think the risks that we're seeing with some of the Gen AI are not necessarily in healthcare. So, you know, there are real risks. I think we need to find a way to have a conversation that is not dystopic on one end, and not naive on the other - that the days of work are behind us and machines will do it all. The truth is hard, nuanced, and in the middle, and people don't always like to have hard conversations. So we need to find a way to turn down the volume, I think, on the panic, so we can focus on the actual real risks, both inside healthcare and outside, and try to prepare for that. And you know, before, we were talking about infrastructure, like the non-sexy stuff, right? There are also non-sexy risks in AI. How do we make sure that good AI tools don't enhance the already existing disparities we have in health care - folks in cities, folks with higher incomes get the best of AI-enabled care, and folks that live outside of cities or have struggled financially don't? I mean, there are all sorts of risks that we have to be thinking about. We just can't be distracted by some of the stuff that we see with these very large language models that aren't necessarily being used in healthcare to begin with. No one should be putting a medical question, you know, into their standard GPT and asking it - that's not what it's built for.

 

Chris Hoyd  33:52  

I know that AI governance is something that's important to you and that you spend a lot of time thinking about and talking to clients about. Tell us your thoughts on AI governance at this point.

 

Jennifer Geetter  34:04  

So in terms of awareness, there's a lot of discussion about whether we should have, you know, an AI agency at the federal level. For now, we don't have that. And so there are a lot of different federal agencies that, within their existing remit, govern AI from particular angles. You have, for example, the Office for Civil Rights, which administers the HIPAA privacy, security, and breach notification rules - a lot of what we could talk about with respect to AI governance would be about data, so they have their piece. You have ONC. You mentioned the FDA. You have the FTC. You have CMS and reimbursement-related activities. There are all these different states, as we talked about - all these different stakeholders - and they each see a piece of the puzzle. So I think one important step for a company at any stage is to avoid, you know, regulatory whack-a-mole, where they solve their FDA problem at the expense of creating a privacy problem. You have to really try to look at these holistically. That's not a new problem, and it's not an AI-generated problem, but because AI is so far out in front of the regulations, I think it's a particularly acute one. So really understanding the variety of regulators who might care about what you're doing, might have provided guidance or other types of tools to give you a sense of what they're hoping for, or may have existing regulations that are going to be grafted onto what you're doing - I think that is important. It's a hard task, and not always an intuitive one. And the US system is different than most of the international system. So I don't minimize how difficult that is for companies, early stage or not, but I think it's an important component to longevity. In terms of the liaison function, I think it depends a little bit on the agency. And also, there's congressional interest in this topic. So my sense is that both regulators and members of Congress want to do this right. They are aware that the technology isn't just going to regulate itself, and they can't be totally hands off. But it's a very complicated subject - I mean, from just an engineering perspective, it's really complicated. And I think there is some caution about rushing in and regulating in a way that will either be way too permissive or will strangle the industry. And that's a difficult task. So what I think is really important in the meantime is that companies think about their own self-governance. Complying with whatever legal components are out there is the minimum. But if you are starting a company, or you're a lawyer or a compliance officer at a company, or a head of product, think of yourself as a public steward. And I don't mean to sound pollyannish - I really mean this. You probably know more about how your product or service works than anyone else can. Think about how you want it to work in the market. Prepare for the above-the-fold New York Times story - take that seriously, not just as risk or liability mitigation, but because you're proud of what you do. And if you ask yourself that question - we've seen terrific product innovations around that. And in the meantime, one of our best defenses is companies that take that part seriously. I think in the end, compliance will be a commercial differentiator. I think public trust is really important in general, but especially here, because we need the data from all of us - we want it to be representative, we want AI to reflect back on the community.
So we need the community to feel like they're ready to participate. Otherwise the training data is going to be skewed, and so will the product. We also want people to feel comfortable and confident using the products and services that make sense for them, and to be part of the conversation about where we're not ready to go. And I'm not sure we're quite doing that yet - finding ways to involve the public. So, I do a lot of research work: IRBs have had community members for years. You know, do you have community voices as part of AI governance generally, or in the industry? Have you provided training to frontline health care providers who are going to get these AI questions, especially in the growing number of states that have a transparency requirement, meaning a disclosure requirement? Do folks feel empowered to have these conversations? Are these conversations that all providers need to be ready to have, or do we have AI ambassadors? Have we thought about analogies that can help explain this? Any client that hears this will know that most of my analogies come from kids' stories and fairy tales - I guess you can't take the mom out of the lawyer - but they help. They help us feel like AI is not totally new; we've done other things that can help us do this well, and profitably, and in a way that we can be proud of. So all of these are tools that can have the AI reflect back not just on the data, but on our values. And I think this is really important, and different companies are going to do it in different ways. I've seen some tech companies have a trust page, you know, where they're not just posting their privacy policy, they're really talking about it. Consumers may not go there, but if they want to, they can. And then I think about AI literacy - for the public, for schools. Our young people are only going to grow up in a post-AI world; do they understand it? You know, we talk about financial literacy for kids - I think AI literacy is really important.

 

Chris Hoyd  40:15  

Thanks so much for joining us. You can also connect with us on LinkedIn, YouTube or on our website at productinhealthtech.com. If you have ideas or suggestions on what you'd like to hear in a future episode, or if you'd like to be a guest, please shoot us an email at info@productinhealthtech.com