
Slightly Refined Tracking Tech Guidance

The ongoing saga of how to use tracking technology in healthcare without causing problems under HIPAA got a new chapter on March 18, 2024, when the Office for Civil Rights (OCR) updated its guidance on the “Use of Online Tracking Technologies by HIPAA Covered Entities and Business Associates.”

The updates offer more examples around OCR’s supposedly clear-cut statements about tracking technology use. Arguably, wholesale changes would have been more beneficial, but the tweaks do offer some helpful clarifications. Since OCR did not clearly call out where it placed the changes, it is instructive to compare the guidance against its original state, which is thankfully possible thanks to internet archiving services.

For some additional thoughts about the original guidance and the frustration that it engendered, check out the following earlier posts: Tracking Tools and Privacy Gaps and Shading the Gray for Tracking.

The First Set of Changes

The first set of changes is found in the “Tracking on unauthenticated webpages” section of OCR’s guidance. New examples are offered to aid interpretation of the guidance.

It appears that OCR intends for the examples to provide somewhat clearer dividing lines between information that constitutes PHI and information that does not. The first example focuses on a webpage visitor who seems to be looking for general information about a hospital, such as job postings or visiting hours. In those instances, OCR notes that information collected by tracking technology is unlikely to be PHI even if it is clearly identifiable. OCR states that information collected from these visits does not relate to an individual’s past, present, or future health, healthcare, or payment for healthcare, at least one of which must be present for the information to be PHI.

The second example is a student looking at the scope of oncology services in connection with writing a term paper. OCR notes that even though tracking technology collects information on pages relating to potential healthcare services, the research does not relate to a situation that meets the definition of PHI.

The third example is an individual visiting a hospital’s oncology services webpage to seek a second opinion or treatment options. In that instance, OCR feels that information collected by tracking technology would constitute PHI because it relates to seeking treatment.

The final example is an expansion of a point made in the original guidance that relates to scheduling an appointment or using an online symptom-checking tool, both on an unauthenticated webpage. Scheduling an appointment would clearly relate to healthcare services since the individual has booked time with a clinician. The online symptom checker is a little looser, but can be seen as seeking information about a current health condition.

The new examples concerning the unauthenticated pages appear helpful at first glance. The first example responds to some of the criticism that not every visit to a healthcare facility’s webpage relates to an individual’s health. Calling out areas of the webpage where it would be a stretch to say that the visitor is revealing something about their health is helpful.

However, the other examples introduce intent into the analysis. Pulling out statements from the examples, there is a student writing a term paper contrasted with a different person looking at the same information for a second opinion. How can the operator of a website know why some unknown individual is looking at a particular webpage? Is there a form or other indicator where the visitor can check off why they are on the page? Will the interaction with the page somehow reveal the reason for visiting? The answer to those questions is very likely no, which is why the guidance will now likely raise new questions. Instead of offering clarity, the guidance could arguably support either of two competing positions: that all visits, unless clearly indicated otherwise, are innocuous and not subject to HIPAA, or that all visits will be treated as creating PHI that requires compliance with HIPAA.

Enforcement Priorities

The final major addition is the identification of OCR’s enforcement priorities. OCR noted that its investigations and enforcement will focus on compliance with the HIPAA Security Rule. Specifically, OCR wants to make sure that entities are properly assessing and mitigating risks to PHI when tracking technologies are being used. OCR gives the caveat that each investigation is driven by the particular facts and that it will review technical information as part of its review. However, the clear between-the-lines implication is that some enforcement is coming and the industry should be ready for a headline.

The trouble with knowing where enforcement will focus is anticipating how a complaint could come in. Given the noted deficiencies with the new examples, will a user who only internally knows the reason for their visit to a hospital’s webpage submit a complaint when tracking technology captures their information after they seek information about a condition? Will OCR take that individual’s word at face value when the reason is completely unknowable to the entity that uses the tracking technology? That scenario could easily create the setup for a public fight that does not benefit anyone.

Conclusion

Given the position laid out by OCR, expect renewed calls from the healthcare industry for clarification or modification of the guidance. The examples, while seemingly helpful, only create a strong likelihood of more complications. Those complications could result in public disputes or in some organizations trying to take advantage of perceived loopholes. Regardless of the perspective, probably the only certainty is that the discussion around tracking technology is still far from settled.


Not Ready for Primetime

Generative AI and large language models (LLMs) continue to garner a lot of press, attention, and investment in healthcare. The promise is that such tools will free up a lot of time by offloading some tasks or potentially filling roles that currently remain empty. However, can the accuracy of the tools be trusted? How will the tools be trained and on what data? Those are all valid considerations that must be appropriately addressed before widespread or in-depth use can really occur.

The Accuracy Issue

A recent study compared ChatGPT with Google for questions and searches relating to dementia and other cognitive decline concerns. The objective of the study was to compare the results each tool returned for the dementia-related questions. The questions were a combination of informational and service-delivery questions. The responses were then evaluated by domain experts based upon the following criteria: (i) currency of the information, (ii) reliability of the information sources, (iii) objectivity, (iv) relevance to the actual question that was posed, and (v) similarity of the responses between the two tools.

After evaluating the results, the researchers found some positives for both options. Google was determined to provide more current and reliable responses, whereas ChatGPT was assessed as more objective. A bigger differential was found in response relevance, with ChatGPT performing better. Readability was assessed poorly for both tools, as the average grade level of the responses was in the high school range. Similarity of content between the tools varied widely, with most responses being rated as medium or low for similarity.

The researchers concluded that both Google and ChatGPT have strengths and weaknesses. Some of the biggest issues for ChatGPT are ones commonly identified in coverage of its capabilities. Specifically, the biggest weakness is not providing a source for the information it presents, which means it can be difficult to analyze the accuracy of a response even when it may seem to be of high quality and very useful. For Google, the relevancy of the responses could be improved, though arguably providing referrals or references to helpful resources could be better.

The research is helpful for understanding the current shortcomings of the tools and could provide some insight into how improvement can occur. A very important factor to keep in mind is that neither Google nor ChatGPT is a healthcare-specific tool. Both are designed for broad, generalized use and are not trained for the nuances of healthcare or the healthcare industry. Could better training make a difference? The answer is likely yes, but that leads into the next issue.

How to Train for Healthcare

If tools like Google and ChatGPT are not healthcare specific, how can they be made healthcare specific? Specialized training is one of the clearer answers. But that also brings its own question of what healthcare-specific data will be used for that training.

One aspect of the training would be feeding generalized medical information into the tools from publicly available information sources. The sources would likely include government documents, journal articles, scientific papers, and other evidence-based, verifiably accurate sources that actual clinicians would rely upon. One further issue on that front would be how to “correct” the training as evidence and knowledge evolve. Even humans are not necessarily the best at immediately acting upon or internalizing new data and breaking from old habits. Would the same biases or limitations be inherent in a generative AI or LLM tool? Admittedly, without a technical background, that query cannot be addressed in this discussion, but hopefully others can enter the discussion and provide a more nuanced and informed understanding.

Another aspect of the training is a bit more complicated. The tougher area is how to train tools to understand the nuances and idiosyncrasies of patient and clinician communication, documentation, and related interactive components of healthcare that come naturally to individuals. What information sources can be used to train technology on that front? A relatively common answer has been electronic medical records and other troves of data being created through digital interactions between patients and clinicians.

While requests for that data usually occur on a de-identified basis, it does raise the perpetual issue of whether combining so much data means that it will actually remain de-identified. Terms for acquisition of the data may also seek to keep it indefinitely with no obligation to return or delete it. Before an entity shares data in that scenario, it should be very clear on the conditions it attached to collection of the data as well as what potential uses of the data were identified. If care is not taken, an entity could very easily create a very big headache for itself.

The other aspect of sharing so much patient data, even if permissible under law and contract, is the impact on the individuals whose information is being shared. The discourse around privacy and data sharing over recent years has focused on individuals gaining more control over their data, being more clearly informed of potential uses, or being allowed to participate in the benefits derived from use of the data. It is likely that none of those scenarios would play out in sharing data for purposes of training a generative AI or LLM tool.

Should that happen? Arguably it is more of an ethical dilemma than a legal one (at least assuming all of the legal and regulatory checkboxes have been ticked). There is no easy or clear answer, but it should be brought to the fore before too much data exchanges hands and gets loosed into the wild.

Looking Ahead

Training, development, release, and use of generative AI and LLM tools will not stop. Given that reality, it is essential to establish more robust parameters guiding those efforts and what will happen with the data. Absent a considerate approach, backlash can be expected, which could undermine valuable tools and solutions.


Tread Carefully with New Free Technology

When a new form of technology hits the market, many people will rush to use it in an attempt to tap into the perceived new worlds being opened. Healthcare has experienced that rush many times, especially in the current age of technology explosion. However, rushing into the use of new technology does not come without risk. The high stakes of healthcare, whether considering patient impact or regulatory compliance, call for assessing how to most appropriately use the technology before rolling it out.

The New Kid on the Block

An artificial intelligence based system, ChatGPT, has garnered the greatest amount of attention in the past few months on the technology front. ChatGPT produces content that can be easy to understand and very closely mimics what an actual individual could produce. Further, ChatGPT functions by responding to conversational language, which means no knowledge of coding is required or even really any deep level of technical skill.

The model works by processing a vast amount of existing content that informs what ChatGPT will produce. An interesting, high-level overview (this is said somewhat tongue in cheek, as the discussion still gets fairly technical) of what ChatGPT is, written by Stephen Wolfram, breaks it down as the system going word by word, determining which word best fits next based upon what is already there, and then continuing to iterate using different models. Arguably individuals do not write in that manner, though there is somewhat of a curious question posed by that approach. Is it something that our brains do automatically by intuitively building sentences and overall works by subconsciously processing what is already there? It is a potentially endless thought loop and one that can be distracting while actively trying to write.

As ChatGPT goes through all of that seemingly random analysis, it is likely using a temperature parameter to assess what should or could come next as it builds the final content. That is part of the reason why the same input will generate different responses rather than ChatGPT merely reproducing the same thing time after time.
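To make the word-by-word idea and the temperature parameter a bit more concrete, the toy sketch below shows how raw model scores for candidate next words can be turned into probabilities and then sampled. The candidate words and scores are made up purely for illustration, and real systems like ChatGPT work over enormous vocabularies with far more machinery, so treat this as a rough sketch of the sampling step, not a description of how OpenAI actually implements it.

```python
import math
import random

def sample_next_word(scores, temperature=0.8):
    """Sample one candidate word from raw model scores.

    Lower temperature sharpens the distribution (the top-scoring word wins
    almost every time); higher temperature flattens it (more variety), which
    is part of why the same prompt can yield different responses each run.
    """
    # Scale scores by temperature, then apply a softmax to get probabilities.
    scaled = [s / temperature for s in scores.values()]
    max_s = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - max_s) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(list(scores.keys()), weights=probs, k=1)[0]

# Made-up scores for the word that might follow "The patient should ..."
candidates = {"rest": 2.3, "follow": 1.9, "schedule": 1.1, "avoid": 0.6}
print(sample_next_word(candidates, temperature=0.8))
print(sample_next_word(candidates, temperature=2.0))  # flatter distribution, more surprising picks
```

Run repeatedly, the low-temperature call tends to keep choosing the same top word, while the high-temperature call wanders more, which mirrors why identical prompts do not produce identical output.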

Leaving the technical details to the side, mostly because those details are admittedly over my head, the results from ChatGPT are stunning. The content produced could, for the most part, fool a reader into believing that it was created by a human. Further, the content fits into almost every field and every scenario.

The flexibility of the content being created is where both the imagination and the possible danger lie. Users could try to produce content in situations where information is shared inappropriately or utilized to cut corners.

Enter Healthcare

In light of the easy-to-produce, ready-to-go content from ChatGPT, it was only a matter of time before use cases in healthcare were identified. A couple of the quickest uses were to create prior authorization requests and appeal letters that were relatively convincing. The content being created gained a lot of social media attention and interest in experimenting with the possibilities.

But wait, is the content accurate? Can patient information be entered? How do the creators of ChatGPT feel about delving into healthcare? Those are only a few of the questions, with a whole host waiting in the wings behind them.

Privacy and ChatGPT

Before rushing into flooding ChatGPT with information, healthcare users should know that the operator of ChatGPT acknowledges that personal data can be processed through ChatGPT, in which case it would be necessary to execute appropriate agreements. Executing agreements means entering into an arrangement with ChatGPT, which would go beyond any potential free use of the tool. A review of the terms of use only finds reference to GDPR and the California Consumer Privacy Act, not HIPAA. That implies the operator of ChatGPT does not have measures in place to ensure the protection of data as required in the healthcare setting.

If HIPAA is not respected, then patient information should not be entered into ChatGPT. That means healthcare users cannot create personalized content because the entry of any patient specific information would run afoul of the privacy requirements imposed by HIPAA.

Enter tools purporting to layer HIPAA compliance onto the use of ChatGPT. One such tool was announced by Doximity with an assertion that the content is housed within a HIPAA-compliant space operated by Doximity. It is possible that the assertion is true, which could occur if data are only entered in the secure space operated by Doximity. That could mean basic prompts that pull in ChatGPT-created content that is editable only within the Doximity platform. That scenario would be a two-step process, which would call for restricting the ability to input patient data that would flow into ChatGPT.
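As a rough illustration of that two-step idea, the sketch below keeps patient-specific details inside the local environment and only sends a scrubbed, generic prompt to the outside tool. Every name here (redact_obvious_identifiers, send_to_external_llm, the regex patterns) is hypothetical, and a few regular expressions are nowhere near real de-identification; the point is only to show the shape of the workflow, not how Doximity or any other vendor actually built theirs.

```python
import re

# Hypothetical, illustrative patterns only; real de-identification requires
# far more than a handful of regular expressions.
IDENTIFIER_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # SSN-style numbers
    re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),  # dates such as birth dates
    re.compile(r"\bMRN[:\s]*\d+\b"),           # medical record numbers
]

def redact_obvious_identifiers(text: str) -> str:
    """Replace obvious identifiers before a prompt leaves the secure environment."""
    for pattern in IDENTIFIER_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def send_to_external_llm(prompt: str) -> str:
    """Stand-in for the call to the outside tool (e.g., ChatGPT); stubbed here."""
    return f"Draft letter based on: {prompt}"

def draft_prior_auth_letter(request: str) -> str:
    # Step 1: only a generic, scrubbed prompt goes to the external tool.
    generic_prompt = redact_obvious_identifiers(request)
    draft = send_to_external_llm(generic_prompt)
    # Step 2: patient-specific editing then happens only inside the secure platform.
    return draft

print(draft_prior_auth_letter("Prior auth for MRI, MRN: 123456, DOB 01/02/1980"))
```

Even a design along these lines would still need the appropriate agreements and a real de-identification or access-control layer; the sketch simply shows where the restriction on patient data would sit in the workflow.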

Even if a query to ChatGPT can be protected in a way that meets HIPAA’s requirements, use should still be done carefully. Individuals working for an organization most likely could not enter into an agreement by themselves that binds the larger organization. Putting it a bit more plainly, an employed physician in a large group could not sign up for a service as an individual and put patient information in the service. Why not? Because in most scenarios the individual physician does not have the authority to create a legal obligation on behalf of the employer and, while employed, the patient information is subject to the employer’s compliance with HIPAA. As always, it is necessary to consider all of the layers of compliance.

Given the sensitivity of healthcare information, being very clear on the privacy ramifications is essential. Giving away information without appropriate protections is a recipe for future problems.

Accuracy of Information

Another potential complication for healthcare is ensuring that the ChatGPT produced content is actually accurate. Possibly spreading misinformation because a response is presented with confidence by the tool is problematic. An appropriate professional should carefully review any content that is generated through ChatGPT because simple wording changes can have a big impact.

Paying attention to the details is very important since one proposed use of ChatGPT is to create arguably easier-to-understand patient instructions, discharge papers, or other patient-facing materials. If those materials lead a patient down the wrong path, liability will quickly follow. Assuming that any tool, but especially new ones that are still being proven out, can be fully trusted will lead to trouble.

Promise Ahead

Despite the caution about rushing into use of a tool like ChatGPT, this is not an argument to avoid such use. Instead, the creation of these tools and their ongoing refinement should be seen as creating promise. It is impossible to fully know what the future will hold, but it will clearly be exciting and filled with the unexpected.