R&D is creative work undertaken on a systematic basis in order to increase the stock of knowledge, including knowledge of man, culture, and society. In a business setting, R&D therefore comprises the investigative activities a company chooses to conduct with the intention of making a discovery that can either lead to the development of new products or procedures or improve existing products or procedures.
During the study design phase, QC personnel provide an independent review of the proposed protocol. The QC plan includes comparison of the study's CRF to the objectives set forth in the protocol to ensure that the CRF is designed to collect all necessary data. A requirement to review CRF completion guidelines is also an element of the QC plan.
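As a simple illustration of this comparison, the hypothetical Python sketch below checks whether every data item required by the protocol is captured by a CRF field. The item names and data structures are assumptions for illustration only, not part of any standard.

# Hypothetical QC check: does the CRF capture every data item the protocol requires?
# The item names below are illustrative assumptions.

protocol_required_items = {
    "informed_consent_date",
    "inclusion_criteria_met",
    "baseline_blood_pressure",
    "adverse_events",
    "concomitant_medications",
}

crf_fields = {
    "informed_consent_date",
    "inclusion_criteria_met",
    "baseline_blood_pressure",
    "adverse_events",
}

missing = protocol_required_items - crf_fields
if missing:
    print("CRF is missing protocol-required data items:", sorted(missing))
else:
    print("CRF captures all protocol-required data items.")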
For overall site management, a complete QC plan addresses the following:
investigator selection and qualifications
– experience in conducting clinical trials
– experience with the specific indication
– not on the FDA's restricted or debarred lists
– adequate staff and facilities
– personal involvement
study conduct (monitoring)
– subject informed (signed informed consent form)
– subject's eligibility (inclusion/exclusion)
– protocol compliance
– adverse events (AEs) and concomitant medication
– drug accountability and storage
source document verification
– medical records
– lab data
– progress notes
– diagnostic tests
During the data management process, the accuracy of the initial data entry is verified by an independent second entry of the same data and a subsequent comparison of the two data sets to identify disagreements. The plausibility of the data is checked with preprogrammed logic checks and a subsequent manual review. The database entries are then QC'd against the CRFs. The tables, listings, and graphs (TLGs) generated as part of the statistical analysis of the data are also inspected to ensure their accuracy, as is any text in the CSR that refers to the TLGs.
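A minimal sketch of such a preprogrammed logic check is shown below. It is a hypothetical Python illustration; the field names, ranges, and rules are assumptions rather than part of any particular clinical data management system.

# Hypothetical preprogrammed logic checks on entered data.
# Field names and valid ranges are illustrative assumptions.

records = [
    {"subject": "001", "visit_date": "2006-03-01", "consent_date": "2006-02-20", "weight_kg": 72.5},
    {"subject": "002", "visit_date": "2006-03-05", "consent_date": "2006-03-10", "weight_kg": -4.0},
]

def logic_check(rec):
    """Return a list of query texts for one record."""
    queries = []
    if rec["consent_date"] > rec["visit_date"]:          # consent must precede the visit
        queries.append("Consent date is after visit date")
    if not (30.0 <= rec["weight_kg"] <= 250.0):          # plausible adult weight range
        queries.append("Weight outside plausible range")
    return queries

for rec in records:
    for q in logic_check(rec):
        print(f"Subject {rec['subject']}: {q}")

Records that fail a check would be flagged for the subsequent manual review and query resolution described above.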
The QA activities to be conducted during a specific clinical trial are set out in a QA audit plan. The plan specifies the number of investigator sites to be audited, the criteria for selecting them, and the vendors to be audited, such as labs and drug packaging and distribution providers. It also specifies which internal study processes will be audited, from initial study design through site and data management and statistical analysis to the final CSR. The plan identifies audit team members and auditees for each study stage, as well as the standards against which the audit will be conducted, such as the protocol, CRF completion guidelines, SOPs, ICH/GCP guidelines, and FDA regulations.
Audits must also consider the standards of countries other than the United States, such as the recently adopted EU Clinical Trial Directives 2001/20/EC and 2005/28/EC.5
A thorough QA audit plan also clearly states the documents to be provided by the auditee, as well as the location, date, and expected duration of the audits. Preparation for QA audits should include review of the approved protocol and amendments, SOPs (both general and study-specific), any specialized training associated with the study, annotated CRFs, and the statistical analysis plan (SAP).
Internal process audits are another important QA responsibility. Internal audits review all the drug development processes employed across several studies to determine if there are systemic problems. This includes a review of employee training, compliance with SOPs and regulatory requirements, and documented evidence that QC was appropriately conducted on the output of each internal process, as well as the final deliverable to a client.
Site management metrics
Internal audits of the site selection and management processes ensure that qualified investigators are selected, that they have adequate facilities and adequately trained staff, and that the study is conducted in compliance with the protocol and all applicable regulations.3 Several metrics commonly evaluated by internal process audits after the study has begun include the following (a computation sketch follows the list):
percentage of monitoring visits completed on time
percentage of evaluable subjects (no protocol violations)
percentage of serious adverse events (SAEs) reported within 24 hours to an Institutional Review Board (IRB) and sponsor
percentage of properly executed informed consent forms
number of queries/CRF pages reviewed
number of missing data entries/CRF pages reviewed.
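The sketch below shows, purely as a hypothetical Python illustration, how a few of these site management metrics could be computed from monitoring records; the record layout and field names are assumptions.

# Hypothetical computation of site management metrics from monitoring records.
# The record structure below is an illustrative assumption.

visits = [
    {"site": "101", "on_time": True},
    {"site": "101", "on_time": False},
    {"site": "102", "on_time": True},
]

saes = [
    {"site": "101", "hours_to_report": 12},
    {"site": "102", "hours_to_report": 36},
]

pct_visits_on_time = 100 * sum(v["on_time"] for v in visits) / len(visits)
pct_saes_within_24h = 100 * sum(s["hours_to_report"] <= 24 for s in saes) / len(saes)

print(f"Monitoring visits completed on time: {pct_visits_on_time:.1f}%")
print(f"SAEs reported to IRB/sponsor within 24 hours: {pct_saes_within_24h:.1f}%")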
Computer systems validation
Computer systems validation examines all aspects of the data handling computer systems (hardware and software) to ensure the accuracy, reliability, consistent intended performance, and the ability to discern invalid or altered records. This includes initial installation and procedures that document how changes to a computer system are justified, approved, and implemented.
The validation process begins with examining user requirements, the results of the initial hardware installation qualification (IQ) tests, the operational qualification (OQ) tests, and the qualification and training of user personnel. The user acceptance test results (performance qualification, PQ) are then compared to the user requirements to ensure that these requirements are met. Once there is assurance that the data handling computer system is validated, data entry can begin.
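As a rough illustration of the PQ-versus-requirements comparison, the hypothetical Python sketch below maps each user requirement to the acceptance tests that cover it and flags requirements with no passing test; the requirement IDs and test outcomes are assumptions.

# Hypothetical traceability check: every user requirement should be
# covered by at least one passing user acceptance (PQ) test.
# Requirement IDs and test outcomes are illustrative assumptions.

user_requirements = ["UR-01", "UR-02", "UR-03"]

acceptance_tests = [
    {"test": "PQ-001", "covers": ["UR-01"], "passed": True},
    {"test": "PQ-002", "covers": ["UR-02", "UR-03"], "passed": False},
]

for req in user_requirements:
    passing = [t["test"] for t in acceptance_tests if req in t["covers"] and t["passed"]]
    status = "met" if passing else "NOT met"
    print(f"{req}: {status} (passing tests: {passing})")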
Data management QC
Since an average error rate for keying text or numbers is about 1 per 300 keystrokes, the entered data is QC'd by having an independent data entry person enter the same data.2 Both sets of data are compared electronically, and discrepancies are resolved by a senior data entry person. After all of the data has been entered and all discrepancies and questions resolved, the database is QC'd by comparing the database to the CRFs from which the data was entered.
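A minimal sketch of this double data entry comparison appears below; it is a hypothetical Python illustration, and the field names and values are assumptions.

# Hypothetical comparison of first and second (independent) data entry passes.
# Field names and values are illustrative assumptions.

first_entry = {("001", "systolic_bp"): "120", ("001", "weight_kg"): "72.5"}
second_entry = {("001", "systolic_bp"): "120", ("001", "weight_kg"): "75.2"}

discrepancies = [
    (key, first_entry[key], second_entry.get(key))
    for key in first_entry
    if first_entry[key] != second_entry.get(key)
]

for (subject, field), first, second in discrepancies:
    # Discrepancies are routed to a senior data entry person for resolution.
    print(f"Subject {subject}, field {field}: first entry {first!r} vs second entry {second!r}")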
Data management metrics
Examples of data management metrics for QA are:
percentage of database errors
percentage of queries manually generated
time from last patient out to database lock
number of times a locked database is opened.
Data management QA
Data entry and the database QC process are other critical areas of the data management process that are audited by QA personnel. The audits review the documented evidence that shows the data accuracy and integrity were verified and checked manually, independently, and programmatically to ensure the data were logical.1 These audits also ensure that all data queries are resolved and that the overall database QC review was conducted according to the QC SOP.
Statistical analysis QC
After a study database has undergone a QC review, it is exported into SAS (a statistical analysis system), in which analytical programs are developed to create the data TLGs to be included in a CSR. The TLGs are QC'd and validated by having independent programmers write programs that produce the same TLGs, and all discrepancies are then resolved.
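The double-programming idea is sketched below. Because this article contains no SAS code, the illustration uses Python and a toy summary table; the dataset, grouping variable, and statistic are assumptions.

# Hypothetical independent double programming: two programmers summarize the
# same analysis dataset, and their outputs are compared before the TLG is final.
# The dataset and grouping are illustrative assumptions.

data = [
    {"treatment": "Drug", "age": 54},
    {"treatment": "Drug", "age": 61},
    {"treatment": "Placebo", "age": 47},
]

def mean_age_by_arm(rows):
    """Production programmer's version of the summary."""
    sums, counts = {}, {}
    for r in rows:
        sums[r["treatment"]] = sums.get(r["treatment"], 0) + r["age"]
        counts[r["treatment"]] = counts.get(r["treatment"], 0) + 1
    return {arm: sums[arm] / counts[arm] for arm in sums}

def mean_age_by_arm_qc(rows):
    """Independent QC programmer's version, written separately."""
    arms = {r["treatment"] for r in rows}
    return {arm: sum(r["age"] for r in rows if r["treatment"] == arm)
                 / sum(1 for r in rows if r["treatment"] == arm)
            for arm in arms}

production, qc = mean_age_by_arm(data), mean_age_by_arm_qc(data)
for arm in production:
    match = abs(production[arm] - qc[arm]) < 1e-9
    print(f"{arm}: production {production[arm]:.2f}, QC {qc[arm]:.2f}, match={match}")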
Statistical analysis QA
QA of the statistical analysis process ensures SAS programs are validated for the generation of all TLGs by checking that all the requirements were met and boundary conditions were tested. QA also verifies that the SAP was developed according to the processes defined in the SOPs and that all statistical analysis plans are approved by the appropriate authority.
In addition to reviewing the statistical analysis process, QA also inspects a predetermined sample of TLGs. Numbers are checked against database listings, and tables are reviewed against format requirements specified in the SAP. The QA report will document the following information:
percentage of TLGs with numerical or formatting errors
percentage of SAS programs adequately validated
time from database lock to final TLGs.
Study site audits
The QA group conducts site audits throughout the course of a trial to assess protocol and regulatory compliance, to ensure that the safety and welfare of subjects are addressed, and to confirm that problems reported by study monitors have been resolved. QA's criteria for selecting which sites to audit include:
high patient enrollment
high staff turnover
abnormal number of AEs (high and low)
high or low subject enrollment rates that are unexpected given the research site's location and demographics.
Site audits ensure adequate documentation of case histories (source documents), such as medical records, progress notes, hospital charts, drug accountability records, ECGs, laboratory test results, SAEs, and informed consents. Audits examine whether all clinical tests were performed at the times specified in the study protocol, and review specimen collection, storage, and shipping (if applicable), as well as the timeliness of the review of clinical test results.
QA site audits evaluate the timeliness of entering data into a CRF, and examine the accuracy of the data by comparing them to their respective source documents mentioned above. Audits also ensure all investigational product received by a site is adequately accounted for.
Corrective and preventative action process
The purpose of a corrective and preventative action process is to ensure that complaints, discrepancies, and noncompliances are visible, prioritized, and tracked, and that the root cause is determined and resolved. It also provides a system to track issues of nonconformity that have not been resolved. This process requires identifying a person responsible for defining and implementing corrective action.
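One way to make such issues visible and trackable is a simple structured record per finding, as in the hypothetical Python sketch below; the fields and status values are assumptions, not a prescribed CAPA format.

# Hypothetical CAPA tracking record; fields and status values are illustrative.
from dataclasses import dataclass

@dataclass
class CapaRecord:
    issue: str                      # complaint, discrepancy, or noncompliance
    priority: str                   # e.g. "high", "medium", "low"
    root_cause: str = "under investigation"
    corrective_action: str = ""
    responsible_person: str = ""    # who defines and implements the action
    status: str = "open"            # remains "open" until resolution is verified

capa_log = [
    CapaRecord(issue="SAE reported to IRB after 24-hour window", priority="high",
               responsible_person="QA manager"),
]

open_items = [r for r in capa_log if r.status == "open"]
print(f"{len(open_items)} unresolved CAPA item(s) being tracked")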
Continual improvement process
QA also has a critical introspective role to continually monitor and evaluate its own activities and to improve all drug development processes. This continual process of improvement tracks and reports on metrics for key activities and deliverables of drug development, keeping in mind the adage that "what gets measured, gets managed." Other inputs to process improvement include a formal debriefing after project close, client and employee satisfaction surveys, and client audits.
Managing the quality of clinical data does the following:
ensures that compliance with the protocol, SOPs, and GCPs is managed
enables systemic problems to be resolved before the end of the study
helps reduce data queries (industry average = $150/query)
identifies ways to reduce cycle times for various processes
ensures data integrity throughout the study's course and that the data collected are the data required by the protocol
ensures the accuracy and consistency of data from entry into the CRF to final datasets reported in the final CSR
plays a critical role in dealing with instances of nonconformity while carrying out clinical trials.
1. I.J. Townshend and A.F. Bissel, "Sampling for Clinical Report Auditing," Statistician 36, 531–539 (1987).
2. R.K. Rondel and S.A. Varley, Clinical Data Management (John Wiley & Sons, New York, 1993).
3. U.S. Code of Federal Regulations Title 21, Part 312.
4. ICH/GCP Consolidated Guidelines, E6.
5. EU Clinical Trial Directives 2001/20/EC and 2005/28/EC.