In quality management standards for large environment forensics, the term administrative review denotes an evaluation of a report and its supporting documentation for consistency with policy, as well as for editorial correctness. Typically, an administrative review of every case file and report in large environment forensics is conducted and documented with the aim of ensuring that the conclusions and supporting data are reasonable and fall within the bounds of scientific knowledge. Administrative reviews are often governed by standards and, in most cases, consist of several elements (Federal Bureau of Investigation, 2000).
These include a review of the case file and the final report for clerical errors, as well as of the elements of casework reports such as the case identifier, the description of the evidence examined, the technology definition, the amplification system, and the results or conclusions. For these elements to be included, they must be both accurate and up to date. An administrative review also includes a review of the chain of custody of the evidence, its disposition, and the procedure for documenting its completion (Council of the Inspectors General on Integrity and Efficiency, 2012).
Technical review is a term used in forensics to describe an evaluation of reports, notes, data, and any other documents related to an organization, with the aim of ensuring that there is an appropriate and sufficient basis for the scientific conclusions. In forensics, a technical review is conducted by a technical reviewer: an employee or independent person who is currently, or was previously, qualified as an analyst in the corresponding methodology. The technical reviewer's duty is to perform a technical review of the applicable report and its contents, provided the reviewer is not the report's author. Whether a team within the enterprise or a team hired from a forensic response organization is engaged, technical reviews of every case file and report must be conducted and documented, alongside administrative reviews, to ensure that the conclusions and supporting data are rational and within the constraints of scientific knowledge (Murphy & Morrison, 2007). The review standards require that the person responsible for carrying out technical reviews have previous experience working as an analyst and be appropriately qualified in the methodology used. In the context of large environment forensics, this means that the enterprise team, or the team deployed from a forensic response organization, must consist of individuals with adequate knowledge of and skill in the methodology under review. Failure to ensure this may compromise the objectivity and soundness of the review.
In addition, the standards require that each successfully completed technical review be documented. Further, the standards require that the technical review of large environment review work include the following elements:
The first is a review of every case note, worksheet, and piece of data in electronic form, such as images and electropherograms, that supports the conclusions (Mealy & Gottuk, 2011). The next element is a review of every large environment forensics type, to verify that each type is supported either by raw collected data or by analyzed saved data in the form of electropherograms or images. A technical review should also include a review of every profile, to verify that all inclusions and exclusions are correct and applicable. In addition, the standards require that the technical review examine any seemingly inconclusive results for compliance with the large environment forensics guidelines.
A technical review also ought to include a review of every control, whether internal lane standards or allelic ladders, which serve to verify that the expected results have been achieved. A further element is a review of the content of the final report, verifying that the results or conclusions are supported by the collected data; the data in question may address every item tested or only the probative portion of the report.
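The review elements described above amount to a completeness check: a technical review may be documented as finished only when every required element has been examined. The following sketch models this as a simple checklist; the item wording, constant name, and function name are hypothetical illustrations, not terms drawn from any standard.

```python
# Hypothetical checklist modeling the technical review elements; the
# names here are illustrative, not taken from any published standard.
TECHNICAL_REVIEW_ITEMS = [
    "case notes, worksheets, and electronic data reviewed",
    "types verified against raw or analyzed saved data",
    "profile inclusions and exclusions verified",
    "inconclusive results checked against guidelines",
    "controls (internal lane standards, allelic ladders) verified",
    "final report conclusions supported by collected data",
]

def review_complete(checked):
    """Return True only when every required element has been reviewed."""
    return all(item in checked for item in TECHNICAL_REVIEW_ITEMS)
```

A reviewer marking off items one by one would only see `review_complete` return True once all six elements are satisfied, mirroring the standard's requirement that the review be documented only upon completion.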
Validation testing is used in forensics to denote a quality assurance measure. It is commonly used to monitor performance and to identify areas in which improvement is required. Validity tests are usually classified into two categories. The first is the internal validity test, which is produced by the agency undergoing validation. The second is the external validity test, which may be open or closed and is obtained from an approved validity or proficiency testing provider.
For external validity tests, an organization that uses a team approach to casework examination may continue to do so. However, every analyst, technician, and technical reviewer must be individually tested for validity more than once a year in each of the organization's technologies.
As designated by the technical leader of the forensic response group, technical reviewers, analysts, technicians, and other personnel must undergo external validity testing semi-annually in every technology that they fully perform and in which they participate in casework. As used here, semi-annual describes an event that occurs twice per calendar year: the first takes place within the first half (six months) of the calendar year, and the second within the second half of the same year. The interval between the two events should be no less than four months and, according to White (2010), no more than eight months. Where the validation testing belongs to an open testing program, the results are forwarded individually to the validity testing provider for inclusion in the external summary report that the provider publishes (Federal Bureau of Investigation, 2000).
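The semi-annual rule above is a concrete scheduling constraint: two tests in the same calendar year, one in each half, separated by at least four and at most eight months. A minimal sketch of such a check, measuring the gap by calendar-month difference as an approximation and using an invented function name, might look like this.

```python
from datetime import date

def valid_semiannual_pair(first, second):
    """Hypothetical check of two external validity test dates against the
    semi-annual rule: same calendar year, one test in each half of the
    year, and a gap of four to eight months (month-number approximation,
    ignoring the day of the month)."""
    if first.year != second.year:
        return False                      # must fall in one calendar year
    if not (first.month <= 6 < second.month):
        return False                      # one test in each half-year
    gap_months = second.month - first.month
    return 4 <= gap_months <= 8           # interval bound per the rule
```

For example, tests in March and September of the same year satisfy the rule, while tests in June and July fail the four-month minimum, and tests in January and December exceed the eight-month maximum.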
The validity of response team members who routinely use both manual and automated methods is tested in each method at least once a year, in accordance with the extent of their participation in casework. For the purpose of monitoring compliance with the twice-per-year validation testing requirement, the response group should clearly define, document, and consistently use the date on which the validation test is considered conducted, whether that is the reception, assignment, submission, or due date.
The technical leader of the response team ought to be informed of each group member's results, and this notification should be adequately documented. The technical leader has a duty to inform the casework administrator, at the time of discovery, of every non-administrative discrepancy that affects the typing results and conclusions.
Review of Quality System
Quality system is a term used in forensic response to describe the organizational structure, responsibilities, processes, procedures, and resources necessary for implementing quality management. The quality system applicable to large environment forensics should be reviewed once per calendar year, independently of any audit (Federal Bureau of Investigation, 2000). The quality system review should be completed under the direction of the technical leader of the forensic response group, and the technical leader's approval ought to be sufficiently documented.