Monday, 5 November 2018

New ultrasound technique shows great promise in detecting breast cancer

Source: https://www.news-medical.net/news/20181030/New-ultrasound-technique-shows-great-promise-in-detecting-breast-cancer.aspx

A new ultrasound technique can help distinguish benign breast tumors from malignant ones. The technology was developed with support from the Swiss National Science Foundation.
Ultrasound is one of the three main technologies used in medical imaging. It is more compact and affordable than nuclear magnetic resonance imaging (MRI) techniques, and safer than x-rays. But the images it produces are often difficult to interpret.
With support from the SNSF, a team from ETH Zurich has developed a new method based on the speed of sound. In initial clinical trials, the team's prototype showed great promise in detecting breast cancer. The researchers have published their work in the journal Physics in Medicine and Biology.

Measuring the speed of sound, not the quantity

An ultrasound probe emits sound waves that penetrate the body. Because organs and tissues have different physical properties, they reflect the waves differently. The device analyses these "echoes" and reconstructs a three-dimensional image of the inside of the body, called an "echograph" or, more commonly, ultrasound.
Usually, the device measures the intensity of the reflected sound waves. But the Zurich team takes an additional parameter into account, namely the echo duration. This new method produces images with enhanced contrast, which could prove useful for cancer diagnosis: It not only detects the presence of tumors; it also aids in distinguishing benign tumors from malignant ones.
This innovation relies on a simple principle: the density and rigidity of tissues determine the speed at which sound travels through them. Tumors are more rigid than the surrounding tissue, especially when they are cancerous. As a result, sound travels 3% faster on average in malignant tissues than in healthy tissues, and 1.5% faster than in benign tumors.
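To make those numbers concrete, here is a minimal sketch (not the ETH team's actual algorithm) of how a speed-of-sound estimate could be bucketed using the percentages above. The 1540 m/s baseline is a commonly cited average for soft tissue; the thresholds and function are illustrative only.

```python
# Illustrative classification of tissue from an estimated speed of sound.
# Baseline of 1540 m/s is a commonly cited soft-tissue average; the 3% and
# 1.5% cut-offs follow the percentages quoted above and are illustrative.

BASELINE_M_S = 1540.0

def classify_tissue(speed_m_s, baseline=BASELINE_M_S):
    """Bucket tissue by its relative speed-of-sound increase."""
    increase = (speed_m_s - baseline) / baseline
    if increase >= 0.03:      # ~3% faster on average in malignant tissue
        return "suspicious: malignant-like stiffness"
    if increase >= 0.015:     # ~1.5% faster in benign tumors
        return "benign-tumor-like stiffness"
    return "healthy-tissue-like"

print(classify_tissue(1590))  # ~3.2% above baseline
```

In a real system, the speed-of-sound map is estimated per voxel from echo travel times rather than as a single scalar, which is what gives the method its enhanced image contrast.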
A simple change of software
During clinical trials, the Zurich team demonstrated the effectiveness of their prototype in detecting breast tumors. "Our goal is to provide physicians with a better tool for decision-making during routine checks, and to avoid unnecessary biopsies," says Orçun Göksel, assistant professor at ETH Zurich and director of the study. "Compared with conventional ultrasound, our images are much easier to interpret."

"Ultrasound is successful because it is safe, portable and inexpensive," says Göksel. "Any physician's office can accommodate a compact, handheld probe. Our technology preserves these advantages while addressing the main limitation of conventional ultrasound - image quality - which is still a problem for diagnosis in many clinical cases." The technique can be used with any equipment, because the key innovation is the processing software. A device that exploits the speed of sound recently entered the market, but it requires cumbersome and expensive infrastructure - the part of the body being observed needs to be submerged in degassed water.
The team is continuing clinical trials - particularly in the area of liver disease and certain muscular disorders due to aging that often lead to stiffening of tissues. Their patent-pending technique requires only minor adaptations to current devices. "As a result," says Göksel, "it could find rapid commercialization. Thanks to a grant from Innosuisse, we are currently developing a system that will work at the push of a button and hopefully be used by hospitals every day."

Wednesday, 31 October 2018

3 Types of Artificial Intelligence (AI) to Impact Clinical Research in the Next 3 Years

Generally, research studies recommend looking at AI through the lens of business capabilities rather than technologies. There are three types of AI being deployed by life science businesses that can provide a framework for improving efficiencies. Very broadly, AI can support 1) the automation of business processes, 2) gaining insight into data through analysis, and 3) making critical business decisions from large volumes of data, improving engagement with customers, patients, suppliers and employees.
  1. Process Automation / Focus on Efficiencies

This type of AI relates to the automation of what we can broadly call 'back-office' functions; it is deployed to handle digital and physical tasks more efficiently using technology.
Such tasks may include:
  • Transferring data from a site into storage, or from a call or email into a record-keeping system.
  • Reconciling failures in systems or data by combining and automatically checking information across multiple systems and document types.
  • Reading radiology or clinical reports with Natural Language Processing techniques to extract provisions and data points, and ultimately to draw conclusions.
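As a rough illustration of the reconciliation task in the second bullet, the sketch below compares records held in two hypothetical systems and flags disagreements. The system names, field names and sample records are all invented for illustration.

```python
# Hypothetical sketch: reconcile records between two systems (e.g. an EDC
# system and a lab database) and flag mismatches automatically.

def reconcile(system_a, system_b, key="subject_id"):
    """Return (key, reason) pairs for records that disagree."""
    b_index = {rec[key]: rec for rec in system_b}
    mismatches = []
    for rec in system_a:
        other = b_index.get(rec[key])
        if other is None:
            mismatches.append((rec[key], "missing in system B"))
        elif rec != other:
            diffs = sorted(f for f in rec if rec.get(f) != other.get(f))
            mismatches.append((rec[key], f"fields differ: {diffs}"))
    return mismatches

edc = [{"subject_id": "S001", "visit_date": "2018-10-01"},
       {"subject_id": "S002", "visit_date": "2018-10-03"}]
lab = [{"subject_id": "S001", "visit_date": "2018-10-02"}]
print(reconcile(edc, lab))
```

The value of automating this is less the matching logic itself than running it continuously, so that discrepancies surface immediately rather than at database lock.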
This type of AI is the simplest and least expensive to implement. In the clinical research industry, data transfer, data quality control, and data management processes remain somewhat archaic, inheriting many inefficiencies rooted in manual labour, outdated technologies and ageing hardware. The efficiency and quality of the outcomes can easily be improved through the use of cloud enterprise solutions.
Cloud solutions, such as IAG's DYNAMIKA, allow companies to create an efficient infrastructure between the suppliers of data (sites or hospitals), customers (biotech or pharma companies) and the data analysis function of the business.
The infrastructure allows oversight of each supplier's performance, ensures full transparency of the overall delivery, and assists each stakeholder in communicating their needs and requests; most importantly, it allows the customer to make fast decisions based on real data.
This type of AI is what we would call a 'low-hanging fruit' that can bring immediate returns.
It is also the least 'smart', in the sense that these applications are not programmed to learn and improve, though our developers at IAG are thinking through how to add more intelligence and learning capabilities to DYNAMIKA.
Examples of such intelligence would include automating the link between a radiology reader measuring a tumor's size on CT or MRI and the RECIST scoring system, or between a radiologist reading an MRI of a rheumatoid arthritis patient's hand and the RAMRIS form. Other companies are devising ways to extract clinical information from radiology or pathology reports and input it directly into the trial database.
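For readers unfamiliar with RECIST, the link between tumor measurements and scoring can be sketched as follows. This is a simplified rendering of the RECIST 1.1 response categories for target lesions only; it ignores non-target lesions and new lesions, which also drive the real assessment.

```python
def recist_category(baseline_sum_mm, nadir_sum_mm, current_sum_mm):
    """Simplified RECIST 1.1 response from sums of target-lesion diameters."""
    if current_sum_mm == 0:
        return "CR"  # complete response: all target lesions gone
    if current_sum_mm <= 0.70 * baseline_sum_mm:
        return "PR"  # partial response: >=30% decrease from baseline
    if (current_sum_mm >= 1.20 * nadir_sum_mm
            and current_sum_mm - nadir_sum_mm >= 5):
        return "PD"  # progressive disease: >=20% and >=5 mm above nadir
    return "SD"      # stable disease: everything in between

print(recist_category(baseline_sum_mm=100, nadir_sum_mm=60, current_sum_mm=73))
```

Automating the step from measured diameters to the category is exactly the kind of deterministic, rule-based work this first type of AI handles well.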
GE, for instance, used AI to integrate supplier data and saved $80m in its first year by eliminating redundancies in negotiating contracts, which were previously managed at the business-unit level. Similarly, one of the largest banks used AI to match contracts from different suppliers, identifying tens of thousands of products and services that were never supplied.
  2. AI-Driven Data Analysis / Focus on Precision of Go / No-Go Decisions

This type of AI is often feared by specialists and described as the machine making better decisions, or replacing the human in the decision-making process. Strictly speaking, this is not true. AI-driven methodologies in clinical research are generally designed to aid a human in making a decision. We can broadly separate AI methods into semi-automated and fully automated.
An example of a fully automated AI-driven method would be one where someone takes an MRI scan and, once the scan is processed by the software, we know whether the patient has a particular disease or not.
The majority of methods are semi-automated and require either the initial input or the final decision to be made by an expert; the AI algorithm, however, handles everything in between in a fully automated manner.
We use semi- and fully automated methodologies for the assessment of medical images such as MRI, CT, X-ray and ultrasound. We use these alongside a human assessor, who assesses pre-processed imaging data instead of the raw data. Decisions made with the help of AI are faster and potentially more accurate. We can also look deeper into the data, extracting valuable information that might not be noticed by the human eye in the raw images.
At the very beginning of clinical research, all data was analysed by experts, and such approaches are still referred to as the 'state of the art'. The industry has since moved to bring automation into pathology, assay analysis and, of course, image analysis, which is perhaps the most visual of them all.
It is important to note that an expert and the machine will most likely make the same decision, provided we are speaking about a genuine expert and a well-designed, robust algorithm.
The advantage of AI-driven techniques is that they can recognize much finer details or pick up more subtle differences, which in turn allows statistical significance to be achieved on smaller patient populations, or a more robust decision to be made (statistically speaking), which often makes all the difference.
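The point about smaller patient populations follows from the standard two-sample size approximation: the required number of patients per arm grows with the square of the measurement noise, so a more precise (e.g. AI-derived) readout can shrink the trial substantially. This is a textbook calculation, not a claim about any specific study.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(sigma, delta, alpha=0.05, power=0.80):
    """Approximate patients per arm to detect a mean difference `delta`
    with measurement noise `sigma` (two-sample, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# Halving the measurement noise cuts the required sample roughly four-fold:
print(n_per_arm(sigma=10, delta=5))  # noisier readout
print(n_per_arm(sigma=5, delta=5))   # more precise readout
```

This quadratic dependence on sigma is why sharpening an imaging biomarker, even modestly, can make an early-phase study feasible on a population that would otherwise be too small.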
We have written several case studies on our company's use of advanced imaging and machine learning across several therapeutic areas. These are constantly being updated and expanded, but some examples can be found here.
This type of AI is easy to implement in a fragmented manner, or alongside something more familiar such as a state-of-the-art method. Using such AI for decision-making in phase III studies, as a methodology for extracting biomarkers, will require consensus from regulatory bodies, and several groups are working towards making this possible. However, in early-phase studies, where the decision is the company's own and needs to be based on comprehensive and robust data, this type of AI will bring high ROI by ensuring fewer failures in later-phase trials.
  3. AI and Insight into the Data / Focus on Business Decisions

This type of AI is used to detect patterns in vast volumes of data and interpret their meaning.  
Here you will find a blend of machine learning techniques and true artificial intelligence. Examples would include predicting a particular cancer type from a combination of genes, automating personalized targeting for the success of a particular therapy, choosing the right dosing based on treated-patient statistics, and much more.
In related markets, this type of AI will give the insurer more accurate and detailed actuarial modelling, inform the drug discoverer which combinations are most likely to succeed or fail, and intelligently influence the design of your next trial - its endpoints and patient populations - based on competitive intelligence.
An example of big data analytics would be a project in which www.clinicaltrials.gov and all relevant publications are assessed by a machine to collect information on a particular treatment and determine which sub-populations would be most likely to benefit from it.
AI-driven insights are very different from traditional analytics: they are much more data-intensive and detailed, and they utilize a model that improves with more data and time. With the data obtained in every interaction, the model can improve its performance, generating more reliable results.
Such AI is typically used to improve performance or to make decisions that only a machine can make.
Other examples of this type of AI include:
  • Intelligent agents that offer 24/7 customer service, addressing issues ranging from password requests and technical support to referring a customer to the right specialist.
  • Health treatment recommendation systems that help providers create customized care plans that take into account individual patient history, previous treatments and patient analytics.
  • A biotechnology or pharma company combining all its early-stage data to determine the next indication in which a drug is most likely to succeed.
This type of AI is the most expensive to implement, as it requires the first two types to be in place for added efficiencies. This is definitely not a low-hanging fruit, and it will require consensus and readiness to change at all levels of the organization.
It is the author's personal view that this is especially relevant and critical for smaller biotechnology companies with platform technologies that are at the early stages of development or have just had their first success. Capitalizing on that success with the best technologies will yield long-term benefits and sustainable growth.
To conclude, AI in any of these forms provides a powerful mechanism for improving business performance and decision-making power. However, before embarking on any AI initiative, companies must understand which technologies perform which types of tasks, and the strengths and limitations of each. In order not to waste valuable time and resources, it is important to set the right expectations from the very start. For instance, rule-based systems can make decisions but are not going to learn; machine-learning-driven analysis can outperform a human when given the right data, but might fail if the variation is too significant; a company offering black-box technology should present sound validation and certification of the quality of its outcomes; and so on. If you don't have data science capabilities in house, you need to build an ecosystem that enables seamless data flow to ensure the success of the entire operation.
If you have read to the end and would like to join like-minded individuals, please join our LinkedIn group 'AI and Machine Learning in Clinical Research' or contact me for more information (olga.kubassova@ia-grp.com).

Beyond Imaging: the paradox of AI and medical imaging innovation

AI will change the interaction between doctors and patients. But most patients won’t even know it’s there. That’s because improving the patient experience, provider productivity, diagnostic accuracy and overall quality of care won’t happen overnight or as part of some massive disruption. The best artificial intelligence (AI) will evolve invisibly with and into the existing care continuum – embedded into workflows, applications and devices already in use today.
Today, hospitals store hundreds of millions of digital images, their numbers growing as imaging scanners such as MRIs and CTs become better at capturing thinner and thinner slices of the body – and 3D and 4D images become the norm. There is simply no way humans can turn that much data into useful information.
Hospitals are producing 50 petabytes of data per year[1]. A staggering 90 percent of all healthcare data comes from medical imaging. It’s a lot of information, and more than 97 percent of it goes unanalyzed or unused.
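Taken at face value, those figures imply that only a tiny fraction of imaging data is ever looked at. Simple arithmetic on the numbers quoted above, applying the ~97% unused share to the imaging portion:

```python
# Simple arithmetic on the figures quoted above.
total_pb = 50.0                   # data produced per hospital per year
imaging_pb = total_pb * 0.90      # ~90% of healthcare data is imaging
analysed_pb = imaging_pb * 0.03   # only ~3% is ever analysed or used

print(f"Imaging data: {imaging_pb:.1f} PB/year")
print(f"Actually analysed: {analysed_pb:.2f} PB/year")
```

On these figures, roughly 45 petabytes of imaging data per hospital per year go largely untouched, which is the "deluge of data" the next paragraph refers to.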
That’s where AI comes in.
AI is vital to tackling this “deluge of data” challenge in healthcare – and medical imaging is a logical place for AI to prove its worth. To do so, man and machine must work together, and radiologists need to appreciate that their roles will transform. By embracing the machine as an integral part of the care team, enabling it to automate routine procedures and processes, clinicians can focus on the most complex and critically ill patients and more efficiently and effectively diagnose and treat disease.
AI-powered medical imaging systems can produce scans that help radiologists identify patterns – and help them treat patients with emergent or serious conditions more quickly. The goal: more accurate, quality care.
Healthcare + Digital
Healthcare providers, equipment manufacturers and imaging technology vendors all have an important role to play in the adoption and evolution of AI in healthcare. For providers specifically, it is key that they ensure the tools address the most impactful patient cases or workflow inefficiencies, and that they help validate the algorithms in a clinical environment so that seamless integration is possible.
Modernizing enterprise imaging can help to improve the overall flow of information in today’s complex healthcare networks. It can also help to establish the consolidated data foundation needed for the efficient use of emerging AI solutions and other data analytics tools.
That’s why GE Healthcare and Intel have partnered to design the next generation of imaging technology.
At a first-of-its-kind digital development lab for healthcare imaging technology, Intel and GE Healthcare coders are developing, testing and validating new innovations across medical imaging hardware, software, cloud and edge technology using the new Intel Xeon Scalable platform. The goal is to create solutions that will offer greater hospital efficiency through increased asset performance, reduced patient risk and dosage exposure – with faster image processing – and expedited time to diagnosis and treatment. Specifically, Intel and GE aim to enable radiologists to read images more quickly and to lower the total cost of ownership for imaging devices by up to 25 percent.
Clinicians + Code
In addition to looking at ways to best leverage the big data information from medical images, GE Healthcare is also turning to small data – a data source accessible on a local or department level – to gain valuable, real-time insights into employee practices and patient care to address workflow pain points.
Healthcare's oldest form of medical imaging, the X-ray scan, accounts for three-fifths of all medical imaging,[3] but X-ray "reject rates" - the share of images that cannot be used due to poor image quality or patient positioning - can approach 25 percent.[4] Reducing these reject rates could save time and resources, improving the patient experience.
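A quick back-of-the-envelope calculation shows why those reject rates matter. The annual exam volume below is invented; the 60% X-ray share and 25% reject rate are the figures quoted above.

```python
def rejected_xrays(total_imaging_exams, xray_share=0.60, reject_rate=0.25):
    """Estimated X-ray images that must be retaken (repeat dose + time)."""
    return total_imaging_exams * xray_share * reject_rate

# Hypothetical department performing 100,000 imaging exams a year:
print(rejected_xrays(100_000))
```

Every one of those retakes means extra dose for the patient and lost scanner time, which is exactly the waste the analytics application described below targets.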
That’s why GE Healthcare developed an X-ray analytics application that helps clinicians automate their data collection to understand the root causes of rejected images. Piloted at the University of Washington, the application has saved their radiology department measurable time and resources (up to 230 mouse clicks and seven hours of time) and has provided technologists with readily available feedback to help them work more efficiently.[5]
This work will eventually help radiologists and technologists get the right image on the first try - improving department productivity and freeing up more time for clinical interpretation and patient interaction. Leveraging small data for specific purposes can produce big wins that move organizations along in effecting change.

The Beyond Imaging Future
AI is not a “one size fits all” solution. It’s about going beyond the imaging machine to make it intelligent with software and analytics. AI partnerships and advanced analytics teams have made significant progress over the past few years, but the work and outcomes are just getting started.
These AI-enabled tools will quickly feel invisible, making way for a more personal doctor-to-patient experience. AI can empower clinicians to do their best work by freeing up their attention to do what only a human can.
To learn more about how GE Healthcare is working to create intelligent devices, register for the “beyond imaging” workshop in a U.S. city near you this Fall.

AI in Medical Imaging

AI is being used or trialled for a range of healthcare and research purposes, including detection of disease, management of chronic conditions, delivery of health services, and drug discovery.

AI has the potential to help address important health challenges, but might be limited by the quality of available health data, and by the inability of AI to display some human characteristics.

The use of AI raises ethical issues, including: the potential for AI to make erroneous decisions; the question of who is responsible when AI is used to support decision-making; difficulties in validating the outputs of AI systems; inherent biases in the data used to train AI systems; ensuring the protection of potentially sensitive data; securing public trust in the development and use of AI technologies; effects on people’s sense of dignity and social isolation in care situations; effects on the roles and skill-requirements of healthcare professionals; and the potential for AI to be used for malicious purposes.

A key challenge will be ensuring that AI is developed and used in a way that is transparent and compatible with the public interest, whilst stimulating and driving innovation in the sector.

Wednesday, 1 March 2017

Clinical Trial Innovation: What’s to Come in 2017

A lot has happened in the clinical trials industry in 2016, as biopharmaceutical enterprises are just starting to delve into clinical trial innovation. Risk-based monitoring (RBM) is expanding into new concepts in quality risk management; enterprises have aggregated their data to generate clinical trial predictive models; the definition of patient centricity is cementing; incorporating endpoint adjudication methodologies in trial design is frequently discussed; subject enrollment is becoming more efficient and scalable through big data; and mHealth pilots are starting to demonstrate promising data.


RBM evolves into Quality Risk Management
The topic of RBM has been discussed and implemented in many forms since the FDA released its draft guidance document in 2011. However, as companies scale RBM, clinical quality departments are starting to voice how risk management should be executed.
While a data assessment demonstrated that the Risk Assessment Categorization Tool (RACT) introduces subjectivity into risk analysis, TransCelerate launched several quality risk management initiatives, including Site Qualification and Training (SQT) and Quality Management Systems (QMS). AbbVie's Susan Callery-D'Amico elaborated on her perspective on the QMS initiative, suggesting that proper QMS frameworks must be defined and categorized continually throughout a study and that risk needs to be triaged against resources, and she expanded on the importance of using technology and analytics to continually measure and optimize study quality performance.
Boston Scientific’s Celeste Gonzalez went further by discussing how quality management diffuses into vendor oversight and performance, and how departments should work more collaboratively in order to develop comprehensive vendor oversight plans and analytics.
RBM is moving away from an overarching concept that defines risk management, and into a subcategory or activity under the quality risk management umbrella.
Data is aggregating, enabling predictive modeling
The biopharmaceutical industry is at a point where it is capable of aggregating data from numerous clinical systems. As a result, the industry has launched internal data sciences functions to analyze data for clinical operations. From a quality standpoint, Pfizer is leveraging aggregated data sets to generate predictive models used for foreseeing factors impacting GCP and study quality risk during study design. From an operational perspective, Clinical SCORE has generated enough data in its normative database with study sites to predict the impact of site issues with software on CRA relationships. On the toxicology front, BioCelerate was formed in order to promote toxicology data sharing, facilitate the discovery of new molecules, and enhance go/no-go decisions in clinical trials.
Endpoint adjudication is gaining importance
Many in the industry are starting to realize the importance of reducing data variability and enhancing data quality. Accordingly, involving independent endpoint adjudication committees—which is aimed at improving the quality of clinical decisions made by investigators—is gaining importance. However, while a survey on endpoint adjudication has shown that many biopharmaceutical development professionals recognize the importance and efficiencies of using eAdjudication technology, respondents in departments critical to operationalizing studies, such as clinical operations, are not even aware that such solutions exist. Moreover, no official guidance exists on endpoint adjudication, leaving the industry feeling a bit ambivalent.
Subject enrollment gains ground with big data
Tom Krohn from Antidote recently elaborated on how recruitment technologies are changing the way we engage and enroll patients. Specifically, these technologies are leveraging machine learning algorithms and structured eligibility questionnaires in order to enhance the qualification rate of recruited patients. This is done by delivering the most relevant studies to patients, and by simplifying publicly listed studies to improve patient understanding. Moreover, the traditional recruitment model, which uses advertising to cast a large net, tends to be inefficient and expensive; the algorithmic model engages patients at the point when they are most interested in learning about studies.
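The questionnaire-driven matching described above can be sketched in a few lines. The study identifiers, criteria and patient fields below are invented for illustration; real matching engines handle far richer, free-text eligibility criteria.

```python
# Invented eligibility criteria keyed by hypothetical trial identifiers.
studies = {
    "NCT-A": {"min_age": 18, "max_age": 65, "condition": "rheumatoid arthritis"},
    "NCT-B": {"min_age": 50, "max_age": 80, "condition": "osteoarthritis"},
}

def eligible_studies(patient, studies):
    """Return the studies whose structured criteria the patient satisfies."""
    return [study_id for study_id, c in studies.items()
            if c["min_age"] <= patient["age"] <= c["max_age"]
            and patient["condition"] == c["condition"]]

print(eligible_studies({"age": 55, "condition": "rheumatoid arthritis"}, studies))
```

The hard part in practice is converting free-text protocol criteria into structured fields like these, which is where the machine learning comes in.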
FDA defines patient centricity
In recent months, the FDA voiced its concerns regarding the way the industry is approaching patient centricity in clinical trials. According to some in the industry, patient-centric initiatives involve including the patient to design less burdensome studies, optimizing study protocols’ inclusion/exclusion criteria to enroll more patients, and creating a more engaging and convenient clinical trial environment for patients. However, the FDA has indicated that the industry does not fully grasp the concept of patient centricity, which includes involving the patient to design studies focused on generating outcomes that are clinically meaningful to patients, and leveraging validated research methodologies when defining clinical measures.
mHealth advances in clinical trials
There have been quite a few advances on the mHealth and wearables front. The FDA has suggested that its thinking on the topic of mHealth is aligned with a new global guidance on Software as a Medical Device (SaMD), and sponsors can use this guidance in order to establish their own feasibility criteria to evaluate wearables in clinical trials.
The Sleep Apnea Association is conducting its first mHealth study using Apple Research Kit to measure sleep outcomes in patients via a bring your own device (BYOD) model. Additionally, Clinical Ink published data on the impact of mHealth on patient engagement and subject dropout, offering a glimpse of the promise that mHealth and wearables offer in clinical trial settings.
What’s to come in 2017?
On the quality front, we will likely start seeing the initiation and implementation of new quality risk management infrastructures (but no real data on the impact these infrastructures are having on quality), and new sets of risk and performance indicators. On data aggregation, we will probably start seeing the emergence of case studies with data on predicting clinical operational outcomes. On endpoint adjudication, it is likely that we will see some form of guidance emerging from expert communities or non-profits on how to incorporate adjudication committees and adjudication charter templates. On subject enrollment, we hope to see data on patient behavioral outcomes in digital settings. In mHealth, we will probably see more case studies and data on how wearables are impacting clinical study and patient outcomes.

Source: http://www.appliedclinicaltrialsonline.com/clinical-trial-innovation-what-s-come-2017

Clinical Trials - Feb 2017

As per ClinicalTrials.gov data from February 2017, there are 237,945 studies registered across 196 countries

- 36% of 237,945 studies are registered in the US only
- 47% in Non-US only
- 6% in both US and non-US
- 12% other

 

World: 237,945
Africa: 5,750
Central America: 2,461
East Asia: 24,107
Japan: 4,460
Europe: 67,078
Middle East: 9,630
North America: 110,391
  Canada: 16,503
  Greenland: 1
  Mexico: 2,729
  United States: 99,738
North Asia: 4,203
Pacifica: 5,957
South America: 7,863
South Asia: 3,650
Southeast Asia: 4,792

Thursday, 15 December 2011

Lipitor patent expires, all eyes on Ranbaxy generic

For the Ranbaxy brass, the past week must have been the toughest of their careers as they attempted to patch things up with the drug regulator of the United States to facilitate the exclusive launch of a low-cost version of Pfizer's blockbuster drug Lipitor, whose US patent protection expired on November 30.

An approval from the Food and Drug Administration (FDA) was the only hurdle before Ranbaxy - the generic company that had successfully challenged the Lipitor patent and secured a six-month exclusive right to sell a low-cost version of the $10.7-billion medicine in the US market after patent expiry - could monetise this opportunity.

Ranbaxy is expected to earn $600 million if it manages to sell the Lipitor generic, exclusively for six months.

Watson, the only company other than Pfizer and Ranbaxy that can market a low-cost Lipitor during the exclusivity period (by virtue of an agreement with Pfizer), announced the launch of its product on Wednesday. Unlike Ranbaxy, Watson did not require FDA approval, as the company was merely marketing the medicine produced and supplied by Pfizer.

Pfizer has also reduced the price of Lipitor to minimise the effect of the shift in prescriptions to the low-cost versions.

Ranbaxy's ability to launch this product on time came under doubt after two of its manufacturing facilities in India (one of which was to supply the Lipitor generic) were placed under an export restriction in 2008, after the FDA raised some serious compliance issues. As a result, all new product approvals from these plants were put on hold and exports of 30 drugs were banned.

Now, it is the turn of Ranbaxy to secure approval and start shipping its version of Lipitor, the cheapest among the three, to the US market. Analysts expect the drug to fetch $600 million in six months, though the sheen of its profits may fade if the company pays a good portion of that as a penalty to the FDA to secure marketing approval.

The company had not announced the status of FDA approval till we went to press.

“Half of the revenues from Lipitor generic sales will be profit for Ranbaxy. However, it is a one-time opportunity and will not have a positive impact on the long-term performance of the company,” said Ranjit Kapadia, an analyst with Centrum Broking.

“Ranbaxy may miss a few days if the (FDA) approval comes late, but it will not impact the (six-month) period of exclusivity, as its product exclusivity will kick in only from the day the company launches its product in the US,” said an industry expert who did not wish to be named.

According to him, what will help Ranbaxy is a specific clause in the Medicare Modernisation Act (passed by the United States in 2003) that excludes from its stipulations all six-month exclusivity approvals gained before the enactment of the law. The MMA details the conditions under which a drug company may have to forfeit its six-month marketing exclusivity if it fails to get the necessary approvals on time; the Ranbaxy Lipitor challenge predates the MMA.

Ranbaxy and its majority stakeholder, Japanese drug major Daiichi Sankyo, had repeatedly expressed confidence in their ability to monetise the six-month market exclusivity.

With Ranbaxy's approval status still in question, the industry was abuzz over the various marketing strategies the company might have considered to minimise the loss of its exclusive marketing opportunity.

Daiichi Sankyo, in a recent communication, had stated that Ranbaxy would start selling the Lipitor generic in the US from November 30.

Earlier reports also had hinted that the company plans to manufacture the drug at its site in New Brunswick, New Jersey, US with raw materials sourced from approved sources.

Meanwhile, Pfizer hopes to hold on to the brand's popularity by announcing patient-friendly schemes to make consumers stick to its version of the drug even after the introduction of a low-cost variant.

The company has also, as noted earlier, authorised US-based generic drug maker Watson Pharmaceuticals to sell the “authorised generic” of Lipitor after patent expiry. US laws permit the patent holder to allow one such “authorised generic” to be marketed during the six-month exclusivity period.

Tuesday, 9 November 2010

Update 1: Patent enforceability US Case (Temodar)

The CAFC reversed the district court's findings on the enforceability of patent 5,260,291 (exclusively licensed to Schering).


I have already blogged on this (Link 1) (Link 2).

Thursday, 1 April 2010

News today: Settlements & Litigation

1) Sanofi settlement with generic drug companies - Oxaliplatin [Link]

2) Eli Lilly - Gemcitabine district court decision; product patent (4808614) held valid and method-of-treatment (MOT) patent (5464826) held invalid [Link]

3) In the case between Teva/Duramed vs Watson, the District Court of Nevada granted Teva summary judgment on the validity of the patent [7,320,969] pertaining to Seasonique (a women's contraceptive: levonorgestrel and ethinyl estradiol) [Link]

Wednesday, 17 March 2010

Update: Patent enforceability US Case (Temodar)

Teva, in a press release, said that the parties to the Barr vs Schering patent litigation related to Temodar [temozolomide] have entered into an agreement pending resolution of Schering’s appeal to the Federal Circuit of the U.S. District Court’s decision holding the ‘291 Patent unenforceable.

Under the terms of the agreement, subject to limited exceptions, Teva will only market a generic product should the Federal Circuit uphold the District Court’s decision. Furthermore, the agreement grants Teva the right to commence selling its generic product as of August 2013, during the period of Schering’s pediatric exclusivity.

Earlier Post [Link]

Sunday, 31 January 2010

Patent enforceability US Case [CRCT Vs Barr]

Temodar [temozolomide] is marketed by Merck/Schering and is used for the treatment of brain cancer [glioblastoma multiforme and refractory anaplastic astrocytoma]. Barr Pharma filed an ANDA seeking approval to market a generic version of temozolomide. As the product patent has not expired and is listed in the Orange Book, Barr filed its ANDA with a Paragraph IV (PIV) certification against the product patent.

Product patent details
Patent No: US 5,260,291; Assignee: Cancer Research Technology UK Ltd
Title: Tetrazine derivatives
US Expiry date: Aug 11, 2013
The patent claims priority to the first application, filed 23/08/1982, and the patent term was extended by around three years.

Merck/Schering is the exclusive licensee of this patent ['291].

Upon the filing of the ANDA with a PIV certification against the product patent, CRT sued Barr (Teva), and ANDA approval was consequently stayed for 30 months. In its complaint, CRT argued that Barr's ANDA would infringe the claims of the '291 patent. In its answer, Barr challenged the enforceability of the '291 patent.

Defendant's arguments on patent unenforceability
1) Doctrine of prosecution laches: the prosecution of this patent involved 11 applications and 10 abandonments, amounting to unreasonable delay, with no substantive prosecution for decades.
2) Inequitable conduct: the applicant was in possession of human clinical trial data during prosecution of the patent application and withheld it from the PTO.

The court agreed with both of the defendant's arguments after analyzing the facts and evidence submitted during trial.

Merck will appeal this decision.

Links

Wednesday, 13 January 2010

Erlotinib Indian Case

Erlotinib Case Summary

Erlotinib is marketed by OSI Pharma in the US and by Roche in non-US markets.

Pre-grant opposition of product patent 537/DEL/1996

Erlotinib infringement case:
The erlotinib product patent has been granted in India [Patent No: IN196774, Application No: 537/DEL/1996], equivalent to US 5747498. Cipla announced the launch of a generic version of Tarceva [erlotinib]. Roche sued Cipla in the Delhi High Court for infringement of the product patent ['774] and sought an injunction restraining Cipla from marketing its product.

The injunction was denied to Roche.

Pre-grant oppositions of polymorph-related patents
OSI has filed two patent applications, which are equivalent to US 6900221
IN/PCT/2002/507/DEL [Cipla Vs OSI] granted as ??
IN/PCT/2002/497/DEL [Cipla Vs OSI] granted as ??

Wednesday, 6 January 2010

Power Plants for Liver Diseases

1. Karisalankanni:

Botanical Name: Eclipta prostrata
Used for treatment of:
 Liver disease (herb of choice for liver diseases)
 Ulcer
 Jaundice
 Fatty liver
 Splenomegaly
 Hemorrhoids and
 Indigestion

Source: http://ayurvedichomeremedies.blogspot.com/2007/11/liver-and-ayurveda.html

2. Kizhanelli:

Botanical Name: Phyllanthus niruri
Family: Euphorbiaceae
Hindi: Buin anvalah, Jar – amla
Telugu: Nelavusari
Tamil: Kizha Nelli, Kizkhai Nelli, Kilkkay nelli, Kizvai nelli
Used for treatment of:
 Jaundice
 Gonorrhea
 Urinary tract
 Frequent menstruation
 Dysentery
 Diabetes
 hepatitis B
 Also has antifungal, antiviral, and anticancer properties

Source: http://ayurvedham.com/english/tag/keezhanelli
