
If you work in MedTech, life sciences, or digital health, you have probably noticed how much QA and regulatory work has changed in the last few years. More software-driven solutions. More documentation. More pressure around validation, traceability, and audit readiness. And not much extra time to deal with it.
McKinsey research found that by 2024, around 72 percent of organizations were already using AI in at least one business function. In healthcare, more than 70 percent of organizations are either using or actively rolling out generative AI tools in some form.
At Estenda, we have spent over two decades working alongside MedTech, life sciences, and digital health companies, helping design, build, and scale regulated software solutions. Over that time, one thing has become clear. AI can take a lot of pressure off teams by reducing time spent on repetitive work, but these tools only deliver real value when people stay in control of the process and the decisions.
That perspective is reinforced by our COO and Co-Founder, Richard "RJ" Kedziora, who recently joined the Medical Device Made Easy Podcast. His experience reflects what many teams are now seeing firsthand as these tools move from early experimentation into day-to-day use in regulated environments.

Top 3 Benefits of AI in QA/RA Compliance
Improved productivity and efficiency
AI improves how quickly your team can get through documentation. That is the most immediate impact.
Across MedTech, life sciences, and digital health, QA and RA teams deal with constant documentation. Design inputs, validation plans, risk files, clinical summaries, software requirements. The structure is familiar. The effort is repetitive.
As RJ puts it during the podcast, “It can create documents for you. So, yeah, it’s a huge boon for creativity, productivity, efficiency.” And that point lands quickly when you start using it. You are no longer building everything from scratch.
Instead, you begin with something usable. Then you refine.
He continues, “It’s really about freeing people up to then do higher value tasks.” That is the shift. Less time formatting and structuring. More time reviewing, validating, and making decisions that actually matter for compliance.
You can see this shift across MedTech, life sciences, and digital health alike.
RJ also points out how fast this is becoming expected behavior. “We’re very quickly moving into it becoming the norm, like that you need to use this now to improve your productivity and efficiency.”
More: How AI Optimizes Data and Enhances the Patient-Physician Partnership
Faster brainstorming and first drafts
Starting from zero is slow. AI helps remove that friction.
When you are writing requirements, building validation strategies, or drafting test cases, the first version takes the most effort. That blank page slows everything down.
AI gives you a starting point.
RJ says, “When you are using a large language model, you want to break things down into individual tasks and have it focus on a specific thing.”
This is where teams either get value or get frustrated. If you try to do everything in one prompt, results are messy. If you guide it step by step, it becomes useful.
You might use it to draft generic user needs, outline acceptance criteria, or generate initial test cases. These are common tasks across MedTech, life sciences, and digital health.
As RJ says, “You’re generating requirements, you’re reviewing documents, creating test cases, test data. These are all things that the AI can do for you to help you be more efficient.”
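The "one task per prompt" pattern RJ describes can be sketched in a few lines. This is an illustrative sketch only: the template names, artifact types, and sample context below are hypothetical, not a specific tool's API. The point is the structure, where each QA artifact gets its own narrowly scoped prompt instead of one do-everything request.

```python
# Illustrative sketch of breaking documentation work into focused,
# single-task prompts rather than one sprawling request.
# Template names and wording are hypothetical examples.

TASK_TEMPLATES = {
    "user_needs": "Draft candidate user needs for this feature:\n{context}",
    "acceptance_criteria": "List acceptance criteria for this requirement:\n{context}",
    "test_cases": "Propose test cases (steps and expected results) for:\n{context}",
}

def build_prompts(feature_context: str) -> dict:
    """Return one focused prompt per QA artifact, so each model call
    has a narrow scope instead of trying to do everything at once."""
    return {
        task: template.format(context=feature_context)
        for task, template in TASK_TEMPLATES.items()
    }

prompts = build_prompts("Clinician records a patient's glucose reading.")
for task, prompt in prompts.items():
    print(f"--- {task} ---\n{prompt}\n")
```

Each prompt then gets its own review pass, which keeps the human check manageable.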
But there is an important balance here.
RJ is clear about it. “Is it going to be as accurate as a human? No.” And that matters in regulated work. You cannot rely on it blindly.
At the same time, he adds, “You were able to generate this data very quickly, very efficiently. And it’s that 80 percent.”
That first 80 percent is where the speed comes from.
Then the responsibility shifts back to your team. “Then you want to make sure that you take your expertise to get it to that last 20 percent.”
That final 20 percent is where compliance lives. It is where context, risk, and regulatory alignment come in. That part does not go away.
The strongest teams use AI to move faster at the start, not to skip the thinking at the end.
Better support for training and understanding
Training is often treated as a checkbox. But in regulated environments, it directly impacts quality.
The challenge is that traditional training methods are not always effective. Long documents and static materials do not always lead to real understanding or retention.
This is another area where AI fits naturally. RJ shares how teams are approaching it. “Internally, we started in low risk areas and you see this a lot. How can we use this in low-risk areas, particularly training?”
From there, the use cases expand quickly. “Instead of just reading documents… we can create quizzes with those documents… or create different modalities of that information,” RJ explains.
A single SOP can turn into a summary, a quiz, a walkthrough, or even scenario-based learning. That variety helps different types of learners.
“So now it’s easier for us to create multiple modalities to help people learn and understand how to use those documents,” RJ adds.
This applies across all sectors.
In life sciences, it supports GxP training.
In MedTech, it reinforces design control processes.
In digital health, it helps teams understand system workflows and validation expectations.
There is also an interactive element.
RJ mentions that as systems learn your processes, “you can ask your LLM questions to better understand and figure out how to do things.”
That back and forth helps teams think through tasks instead of just memorizing steps.
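The fan-out from one SOP into several training formats can be sketched simply. In practice each stub below would be handed to an LLM to fill in; the section name, modality labels, and prompt wording here are illustrative assumptions, not a real training system's API.

```python
# Illustrative sketch: one SOP section fans out into multiple
# training modalities (summary, quiz, scenario), as described above.
# Modality names and prompt text are hypothetical examples.

def training_modalities(sop_sections: dict) -> dict:
    """Map each SOP section to stub prompts for several formats."""
    modalities = {}
    for name, text in sop_sections.items():
        modalities[name] = {
            "summary": f"Summarize for new hires: {text}",
            "quiz": f"Write 3 quiz questions covering: {text}",
            "scenario": f"Create a scenario-based exercise from: {text}",
        }
    return modalities

sop = {"Change Control": "All changes require an approved change request."}
out = training_modalities(sop)
```

The same source document stays authoritative; only the delivery format multiplies.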
Top 3 Risks of AI in QA/RA Compliance
Hallucinations and inaccurate outputs
AI can get things wrong. And it does not tell you when it does.
As RJ puts it, “The bad and potentially ugly side of it is it does make up stuff. It’s not very good at saying, I don’t know.”
This is one of the biggest risks in QA and RA.
If incorrect information gets into requirements, validation plans, or regulatory documentation, it can create problems that surface much later.
RJ offers a useful way to think about it. “Think of it like an intelligent intern… incredibly intelligent… but it might make stuff up.”
That is exactly how you should treat it. Helpful, fast, but not independent.
Because of that, he emphasizes, “You do have to review everything that it is producing.”
Every output needs review. No exceptions.
There is another layer to the risk.
RJ points out, “It sounds very authoritative.” That is what makes it tricky. The output looks polished. It feels right.
But that confidence can be misleading.
“So you have to be cautious of not lowering your guard,” he says.
In regulated environments, nothing replaces verification.
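The "review everything, no exceptions" rule can be made concrete as a small human-in-the-loop gate: AI-generated text enters as an unverified draft and can only be promoted by a named human reviewer. This is a toy sketch, and the class and field names are hypothetical, not a real QMS interface.

```python
# Toy sketch of a review gate: AI output starts as a draft and
# never becomes approved without a recorded human reviewer.
from dataclasses import dataclass, field

@dataclass
class DraftDocument:
    title: str
    body: str
    source: str = "ai-generated"
    status: str = "draft"              # draft -> approved, never automatic
    reviewed_by: list = field(default_factory=list)

    def approve(self, reviewer: str) -> None:
        """Promote to approved only with a named human reviewer."""
        if not reviewer:
            raise ValueError("A human reviewer must be named.")
        self.reviewed_by.append(reviewer)
        self.status = "approved"

doc = DraftDocument("Validation Plan v0.1", "...model output...")
assert doc.status == "draft"           # nothing ships straight from the model
doc.approve("J. Smith, QA Lead")
```

The design choice worth copying is the audit trail: the record of who reviewed the output travels with the document.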
Overreliance on AI instead of expert judgment
AI is a support tool. It should not take over decision making.
RJ is very direct about this. “You still need to be in control of your projects.”
That control stays with your team.
He frames AI in a way that resonates with most teams. “It’s a partner, a companion. It’s there to support you and help you.”
That is the right mindset going in.
He comes back to the intern analogy again. “You need to give it the instructions. And the better you can do at those instructions, the better the output that you’re going to get.”
That means your expertise still drives the process.
Even with good outputs, the expectation does not change. “It’s good for brainstorming, producing documents, summarizing information, but double check everything,” RJ says.
Standards stay the same. Review stays the same. Accountability stays the same.
Data privacy, bias, and false reassurance
AI introduces new concerns around data and trust.
RJ raises a clear warning. “I would not put in proprietary information into a general ChatGPT because you’re not entirely sure what’s going to happen.”
That is especially relevant for MedTech, life sciences, and digital health teams working with sensitive data and intellectual property.
You need to control how AI is used. RJ suggests a more secure approach. “You can spin up your own large language model to ensure yourself and make sure that it is protected.”
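Controlling how AI is used can start with a simple pre-flight check before any text leaves your environment. The sketch below blocks prompts that match markers of proprietary or patient data; the marker list is a deliberately small illustrative assumption, and a real policy would be far broader.

```python
# Minimal pre-flight check: refuse to send prompts containing
# markers of proprietary or patient data to an external model.
# The pattern list is illustrative, not a complete policy.
import re

BLOCKED_PATTERNS = [
    r"\bCONFIDENTIAL\b",
    r"\bMRN[:\s]*\d+",         # medical record number markers
    r"\b\d{3}-\d{2}-\d{4}\b",  # US SSN-shaped strings
]

def safe_to_send(prompt: str) -> bool:
    """Return False if the prompt matches any blocked pattern."""
    return not any(re.search(p, prompt) for p in BLOCKED_PATTERNS)

assert safe_to_send("Draft a generic test case for login.")
assert not safe_to_send("Patient MRN: 123456, summarize chart.")
```

A self-hosted model reduces exposure further, but a check like this is still useful as a backstop wherever prompts are composed.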
He also points out a behavior that many teams notice. “It does have a tendency to agree with you.” That sounds helpful, but it is not.
“It wants to make you happy… I’m going to tell you what I think you want to hear,” RJ says. That can create false confidence. And in QA and RA, confidence without verification is risky.
Need Help Navigating AI in Quality and Regulatory Compliance? Reach Out to Estenda Solutions
At Estenda, we partner with MedTech, life sciences, and digital health organizations, with a strong focus on digital health solutions. Our work spans strategy, custom software development, AI and machine learning, and advanced data analytics. We also stay involved through implementation support to ensure these solutions work in real environments.
With over two decades of experience and a strong track record in healthcare technology, we bring both technical depth and regulatory understanding.
If you are looking to apply AI in your QA and RA processes in a way that is practical, compliant, and anchored in human accountability, contact us at info@estenda.com.
Tags
AI/ML Solutions