Data Mining

In this chapter we use the term "data mining" for any computer method that automatically and continuously analyses data and turns it into useful information. Data mining can clearly be used on any data set, but the approach seems particularly valuable when the amount of data is large and the possible relationships within the data set are numerous and complex. Although data mining of drug utilisation information, and of other relevant data sets such as those relating to poisoning, would add greatly to pharmacovigilance, this has not yet been done to our knowledge. Our work in this area is preliminary and will not be referred to here.

In principle the Uppsala Monitoring Centre (UMC) has been doing data mining since the mid 1970s, using an early relational database. As with many automated systems, the relational database to a very large extent replicated a manual approach, in this instance the Canadian "pigeon hole" system (Napke, 1977), in which reports were physically assigned to a slot, encouraging visual inspection. In this way it could be observed when certain categories of report were unexpectedly high.

From the UMC database, countries in the WHO Programme for International Drug Monitoring have been provided with information, reworked by the UMC, on the summarised case data that is submitted from each national centre. This information has been presented to them according to agreed categories and classifications as determined among Programme members from time to time. This kind of system suffers from the following limitations:

* It is prescriptive, the groupings being determined by what has been found broadly useful by experience.

* Each category is relatively simple, but the information beneath each heading is complex and rigidly formatted.

* There is no indication of the probability of any relationship, other than the numbers of incidents in each time period.

This system does not even have all the user friendliness of the pigeon hole system, which allowed a user to scan visually the number of reports as they were filed and so see the rate of build-up in each pigeon hole. Admittedly, the sorting was relatively coarse, but the continuous visual cue given by the accumulation of case reports was very useful.

In improving on the pigeon hole system one can imagine a computer program able to survey all data fields looking for patterns: any relationships between any number of data fields that stand out as more frequent than normal. This ambitious goal might also be linked with some estimate of the probability that the link(s) are not present by chance, given the background of the total data set.

At the other end of the spectrum one might merely ask the question: "Are there any drugs which seem to be more or less probably linked to a reported adverse reaction term in this data set?" This latter question may be tackled in an automated way by calculating, for all medicinal products and adverse drug reaction (ADR) terms, the proportional reporting ratio (PRR), which is akin to a relative risk, a reporting odds ratio (ROR) or Yule's Q. These are point estimates comparing the reporting of a particular drug-ADR combination, relative to all the ADRs reported for that drug, with the reporting of the target ADR by all drugs, relative to all ADRs for all drugs. This can be done without the use of neural networks, using the chi-squared (χ²) test for probability and other methods of precision estimation.
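As a sketch only, these disproportionality measures can be computed from a 2 × 2 contingency table of report counts. The counts below are invented for illustration and the code is a simplification, not the procedure used by any particular centre.

```python
# Minimal sketch of disproportionality measures from a 2x2 contingency table.
# The counts a, b, c, d are hypothetical illustrative values, not real data:
#   a = reports of the target drug with the target ADR
#   b = reports of the target drug with any other ADR
#   c = reports of all other drugs with the target ADR
#   d = reports of all other drugs with any other ADR
from scipy.stats import chi2_contingency

a, b, c, d = 25, 475, 300, 99200

prr = (a / (a + b)) / (c / (c + d))           # proportional reporting ratio
ror = (a * d) / (b * c)                       # reporting odds ratio
yules_q = (a * d - b * c) / (a * d + b * c)   # Yule's Q

# Chi-squared test of independence as a rough indication of probability.
chi2, p_value, _, _ = chi2_contingency([[a, b], [c, d]])

print(f"PRR = {prr:.2f}, ROR = {ror:.2f}, Yule's Q = {yules_q:.2f}")
print(f"chi-squared = {chi2:.1f}, p = {p_value:.2g}")
```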

Bayesian logic may also be used, finding the prior probability of an occurrence of the drug amongst the case data and then the posterior probability of the drug being linked with the specific ADR. Bayesian logic is intuitively correct for a situation in which the probability of relationships must be continuously reassessed as new data are acquired over time. Bayesian logic does not impose any rigidity other than deciding on the initial a priori level of probability, and then allows the acquisition of data to modify that prior into a posterior probability. This process can be iterated continuously.
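To make the updating step concrete, the sketch below uses a conjugate Beta-Binomial model, with invented report counts, to show a prior belief about the proportion of a drug's reports mentioning a given ADR being revised into a posterior as a new batch of case data arrives. This is a generic illustration of Bayesian updating, not the model actually applied to the WHO database.

```python
# Hedged sketch: a conjugate Beta-Binomial update of the proportion of a drug's
# reports that mention a specific ADR. Counts and prior are illustrative only.
from scipy.stats import beta

# Prior: a weakly informative Beta(1, 1) (uniform) belief about the proportion.
alpha_prior, beta_prior = 1.0, 1.0

# Hypothetical new batch of case data: 12 reports of the drug, 3 mention the ADR.
reports_with_adr, reports_without_adr = 3, 9

# Posterior after observing the batch; the same update can be iterated as data accrue.
alpha_post = alpha_prior + reports_with_adr
beta_post = beta_prior + reports_without_adr

posterior_mean = alpha_post / (alpha_post + beta_post)
lower, upper = beta.interval(0.95, alpha_post, beta_post)

print(f"Posterior mean reporting proportion: {posterior_mean:.2f}")
print(f"95% credible interval: ({lower:.2f}, {upper:.2f})")
```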

The next level of complexity is to consider the effects of adding other objects as variables. Stratification, or the use of Bayesian complex variable analysis, increases the calculations and the computational complexity to a level that makes a neural network architecture an advantage. A neural network is a matrix of interconnected nodes. Each node is connected to all other nodes, and represents one data field of a specified type. A neural network learns according to the data provided to it and according to its predetermined logic. For evaluating complexity in a very large data set, the WHO Collaborating Centre for International Drug Monitoring (the UMC) has chosen a Bayesian confidence propagation neural network (BCPNN) as the most favourable platform for development in this area. The use of Bayesian logic seems natural where the relationship between each pair of nodes will alter as more data are added. The neural network "learns" the new probabilities between nodes, and can be asked how much those probabilities are changed by the addition of new case data or by the consideration of multiples rather than singlets or duplets.
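As a rough sketch of the kind of pairwise weight such a network can maintain between a drug node and an ADR node, the code below computes a simple information component: the logarithm of observed versus expected joint reporting probability. The smoothing constants and counts are illustrative assumptions, not the UMC's actual priors or data.

```python
# Hedged sketch of an information component (IC)-style pairwise weight, of the kind
# a BCPNN can propagate between a drug node and an ADR node. The +0.5 smoothing
# terms are a simple stabilising device for small counts, not the UMC's exact priors.
import math

def information_component(n_drug_adr, n_drug, n_adr, n_total):
    """log2 of observed versus expected joint reporting probability."""
    p_joint = (n_drug_adr + 0.5) / (n_total + 1.0)
    p_drug = (n_drug + 0.5) / (n_total + 1.0)
    p_adr = (n_adr + 0.5) / (n_total + 1.0)
    return math.log2(p_joint / (p_drug * p_adr))

# Hypothetical counts: 25 reports of the drug-ADR pair, 500 reports of the drug,
# 325 reports of the ADR, 100 000 reports in the whole database.
ic = information_component(25, 500, 325, 100_000)
print(f"IC = {ic:.2f}")  # IC > 0 suggests the pair is reported more often than expected
```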
