
Forensic Analysis of Mobile Malware

In Mobile Malware Attacks and Defense, 2009

Operating Systems (OS) and File Systems (FS)

The forensic process differs greatly between computers and mobile devices because of the nature of the storage medium. Most mobile devices currently deployed use volatile memory to store user data. Computers generally use nonvolatile memory, in the form of hard drives, as their storage medium (although this is changing in some cases, with many newer model devices integrating large-format nonvolatile memory to store music and video files).

When a device that uses nonvolatile memory is turned off, little generally happens to the storage medium. Devices that rely on volatile memory (such as most mobile devices currently in use) lose data when powered off. Even modern flash storage devices that are capable of retaining data without power can lose information, because the device's memory is divided so that it serves as both volatile and nonvolatile storage at the same time. The memory in these systems is generally backed up through the use of an internal battery which, if depleted, can result in lost data. Forensically, evidence trails on mobile devices can be destroyed through power loss. As such, even a device that is turned off should have a power supply attached if the investigator is to ensure that the data on the device is maintained in a forensically sound manner.


URL: https://www.sciencedirect.com/science/article/pii/B9781597492980000094

Domain 9

Eric Conrad, ... Joshua Feldman, in CISSP Study Guide (Second Edition), 2012

Forensics

Digital forensics provides a formal approach to dealing with investigations and evidence with special consideration of the legal aspects of this process. Forensics is closely related to incident response, which is covered both in this chapter and in Chapter 8, Domain 7: Operations Security. The main distinction between forensics and incident response is that forensics is evidence-centric and typically more closely associated with crimes, while incident response is more dedicated to identifying, containing, and recovering from security incidents.

The forensic process must preserve the “crime scene” and the evidence in order to prevent unintentionally violating the integrity of either the data or the data's environment. A primary goal of forensics is to prevent unintentional modification of the system. Historically, this integrity focus led investigators to cut a system's power to preserve the integrity of the state of the hard drive, and prevent an interactive attacker or malicious code from changing their behavior in the presence of a known investigator. This approach persisted for many years but is now changing due to antiforensics.

Exam Warning

Always ensure that any forensic actions uphold integrity and are legal and ethical.

Antiforensics makes forensic investigation difficult or impossible. One antiforensic method is malware that is entirely memory-resident, and not installed on the disk drive. If an investigator removes power from a system with entirely memory-resident malware, all volatile memory, including RAM, is lost, and evidence is destroyed. Because of the investigative value of information available only in volatile memory, the current forensic approach favors some degree of live forensics that includes taking a bit by bit, or binary, image of physical memory; gathering details about running processes; and gathering network connection data.
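
To make the idea of live forensics concrete, the following minimal Python sketch records two of the volatile data categories mentioned above, running processes and network connections, to a local JSON file. It assumes the third-party psutil library is available on the examiner's workstation and that writing a small note file is acceptable in context; it is not a substitute for a full physical-memory image or a validated forensic tool.

    import datetime
    import json
    import psutil  # third-party library; assumed to be installed

    def collect_volatile_snapshot(path="volatile_snapshot.json"):
        """Record running processes and network connections before power is removed."""
        snapshot = {
            "collected_at_utc": datetime.datetime.utcnow().isoformat(),
            "processes": [],
            "connections": [],
        }
        for proc in psutil.process_iter(attrs=["pid", "name", "username", "create_time"]):
            snapshot["processes"].append(proc.info)
        for conn in psutil.net_connections(kind="inet"):
            snapshot["connections"].append({
                "local": str(conn.laddr),
                "remote": str(conn.raddr) if conn.raddr else None,
                "status": conn.status,
                "pid": conn.pid,
            })
        with open(path, "w") as fh:
            json.dump(snapshot, fh, indent=2, default=str)
        return snapshot

    if __name__ == "__main__":
        collect_volatile_snapshot()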

The general phases of the forensic process are the identification of potential evidence, the acquisition of that evidence, analysis of the evidence, and finally production of a report. Acquisition will leverage binary backups and the use of hashing algorithms to verify the integrity of the binary images, which we will discuss shortly. When possible, the original media should not be used for analysis; instead, a forensically sound binary backup should be used. The final step of the forensic process involves the creation of a forensic report that details the findings of the analysis phase.

Forensic Media Analysis

In addition to the valuable data gathered during the live forensic capture, the main source of forensic data typically comes from binary images of secondary storage and portable storage devices such as hard disk drives, USB flash drives, CDs, DVDs, and possibly associated cellular phones and MP3 players. A binary, or bitstream, image is used because an exact replica of the original data is needed. Normal backup software will only capture the active partitions of a disk and, further, only the data marked as allocated. Normal backups could well miss significant data, such as data intentionally deleted by an attacker, so binary images are used. In order to more fully appreciate the difference between a binary image and a normal backup, the investigator needs to understand the four types of data that exist.

Allocated space—Portions of a disk partition that are marked as actively containing data.

Unallocated space—Portions of a disk partition that do not contain active data. This includes portions that have never been allocated, and previously allocated portions that have been marked unallocated. If a file is deleted, the portions of the disk that held the deleted file are marked as unallocated and available for use.

Slack space—Data is stored in specific size chunks known as clusters. A cluster is the minimum size that can be allocated by a file system. If a particular file, or the final portion of a file, does not require the entire cluster, then some extra space will exist within the cluster. This leftover space is known as slack space; it may contain old data or can be used intentionally by attackers to hide information (a small worked example follows this list).

“Bad” blocks/clusters/sectors—Hard disks routinely end up with sectors that cannot be read due to some physical defect. The sectors marked as bad will be ignored by the operating system since no data could be read in those defective portions. Attackers could intentionally mark sectors or clusters as being bad in order to hide data within this portion of the disk.
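
As a simple illustration of the slack-space idea described in the list above, the short Python sketch below computes how many clusters a file of a given size occupies and how much slack remains in its final cluster. The 4,096-byte cluster size is only an assumed example value; real file systems use a variety of cluster sizes.

    def slack_space(file_size_bytes, cluster_size_bytes=4096):
        """Return (clusters_allocated, slack_bytes) for a file of the given size."""
        if file_size_bytes == 0:
            return 0, 0
        # Round up to whole clusters, since a cluster is the minimum allocation unit.
        clusters = -(-file_size_bytes // cluster_size_bytes)  # ceiling division
        slack = clusters * cluster_size_bytes - file_size_bytes
        return clusters, slack

    # A 10,000-byte file on 4,096-byte clusters occupies 3 clusters (12,288 bytes),
    # leaving 2,288 bytes of slack at the end of the last cluster.
    print(slack_space(10_000))  # (3, 2288)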

Given the disk-level tricks that an attacker could use to hide forensically interesting information, a binary backup tool is used rather than a more traditional backup tool that would only be concerned with allocated space. There are numerous tools that can be used to create this binary backup, including free tools such as dd and windd, as well as commercial tools such as Ghost (when run with specific non-default switches enabled), AccessData® FTK, or Guidance Software® EnCase®.
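
The fragment below is a minimal Python sketch of what a bit-stream copy does conceptually: it reads the source from beginning to end in fixed-size chunks, writes every byte (allocated or not) to an image file, and hashes the stream as it goes. The device path and chunk size are hypothetical examples, and the sketch omits the write-blocking, bad-sector handling, and logging that real tools such as dd, FTK, or EnCase provide.

    import hashlib

    def bitstream_copy(source_path, image_path, chunk_size=1024 * 1024):
        """Copy every byte from source to image; return a SHA-256 digest of the data copied."""
        digest = hashlib.sha256()
        with open(source_path, "rb") as src, open(image_path, "wb") as dst:
            while True:
                chunk = src.read(chunk_size)
                if not chunk:
                    break
                dst.write(chunk)
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical usage; a raw device such as /dev/sdb would normally sit behind
    # a hardware or software write-blocker before being read.
    # print(bitstream_copy("/dev/sdb", "evidence.dd"))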

Learn by Example: Live Forensics

Although forensics investigators traditionally removed power from a system, the typical approach now is to gather volatile data. Acquiring volatile data is called live forensics, as opposed to the post mortem forensics associated with acquiring a binary disk image from a powered-down system. One attack tool stands out as having brought the need for live forensics into full relief.

Metasploit is an extremely popular free and open source exploitation framework. A strong core group of developers led by HD Moore have consistently kept it on the cutting edge of attack techniques. One of the most significant achievements of the Metasploit framework is the modularization of the underlying components of an attack. This modularization allows exploit developers to focus on their core competency without having to expend energy on distribution or even developing a delivery, targeting, and payload mechanism for their exploit; Metasploit provides reusable components to limit extra work.

A payload is what Metasploit does after successfully exploiting a target; Meterpreter is one of the most powerful Metasploit payloads. As an example of some of the capabilities provided by Meterpreter, Figure 10.4 shows the password hashes of a compromised computer being dumped to the attacker's machine. These password hashes can then be fed into a password cracker that would eventually figure out the associated passwords. Or the password hashes might be used directly in Metasploit's PSExec exploit module, which is an implementation of functionality provided by Sysinternals® (now Microsoft) PSExec, but bolstered to support pass-the-hash functionality. Information on Microsoft's PSExec can be found at http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx. Further details on pass-the-hash techniques can be found at http://oss.coresecurity.com/projects/pshtoolkit.htm.

In addition to dumping password hashes, Meterpreter provides such features as:

Command execution on the remote system

Uploading or downloading of files

Screen capture

Keystroke logging

Disabling the firewall

Disabling antivirus

Registry viewing and modification (as seen in Figure 10.5)

And much more, as Meterpreter's capabilities are updated regularly

In addition to the above features, Meterpreter was designed with detection evasion in mind. Meterpreter can provide almost all of the functionalities listed above without creating a new file on the victim system. Meterpreter runs entirely within the context of the exploited victim process, and all information is stored in physical memory rather than on the hard disk.

Imagine an attacker has performed all of the actions detailed above, and the forensic investigator removed the power supply from the compromised machine, destroying volatile memory. There would be little to no information for the investigator to analyze. The possibility of Metasploit's Meterpreter payload being used in a compromise makes volatile data acquisition a necessity in the current age of exploitation.

Network forensics

Network forensics is the study of data in motion, with special focus on gathering evidence via a process that will support admission into court. This means the integrity of the data is paramount, as is the legality of the collection process. Network forensics is closely related to network intrusion detection; the difference is that the former is legal focused, and the latter is operations focused.

The SANS Institute has described network forensics as [4]:

Traditionally, computer forensics has focused on file recovery and filesystem analysis performed against system internals or seized storage devices. However, the hard drive is only a small piece of the story. These days, evidence almost always traverses the network and sometimes is never stored on a hard drive at all.

With network forensics, the entire contents of e-mails, IM conversations, Web surfing activities, and file transfers can be recovered from network equipment and reconstructed to reveal the original transaction. The payload inside the packet at the highest layer may end up on disc, but the envelope that got it there is only captured in the network traffic. The network protocol data that surrounded each conversation is often extremely valuable to the investigator. Network forensics enables investigators to piece together a more complete picture using evidence from the entire network environment.
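
As a small illustration of the kind of reconstruction described above, the Python sketch below summarizes the TCP conversations found in a packet capture. It assumes the third-party scapy library and a hypothetical capture file named capture.pcap; it is only a starting point, since dedicated network forensic tools go further and reassemble full application-layer content.

    from collections import Counter
    from scapy.all import IP, TCP, rdpcap  # third-party library; assumed to be installed

    def summarize_conversations(pcap_path="capture.pcap"):
        """Count packets per (source, destination, destination port) conversation."""
        conversations = Counter()
        for pkt in rdpcap(pcap_path):
            if IP in pkt and TCP in pkt:
                key = (pkt[IP].src, pkt[IP].dst, pkt[TCP].dport)
                conversations[key] += 1
        for (src, dst, dport), count in conversations.most_common(10):
            print(f"{src} -> {dst}:{dport}  {count} packets")

    if __name__ == "__main__":
        summarize_conversations()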

Forensic software analysis

Forensic software analysis focuses on comparing or reverse engineering software; reverse engineering malware is one of the most common examples. Investigators are often presented with a binary copy of a malicious program and seek to deduce its behavior. Tools used for forensic software analysis include disassemblers and software debuggers. Virtualization software also comes in handy, as investigators may intentionally infect a virtual operating system with a malware specimen and then closely monitor the resulting behavior.

Embedded device forensics

One of the greatest challenges facing the field of digital forensics is the proliferation of consumer-grade electronic hardware and embedded devices. While forensic investigators have had decades to understand and develop tools and techniques to analyze magnetic disks, newer technologies such as solid state drives (SSDs) lack both forensic understanding and forensic tools capable of analysis.

Mathias discussed this challenge in his dissertation [5]:

The field of digital forensics has long been centered on traditional media like hard drives. Being the most common digital storage device in distribution it is easy to see how they have become a primary point of evidence. However, as technology brings digital storage to be more and more of larger storage capacity, forensic examiners have needed to prepare for a change in what types of devices hold a digital fingerprint. Cell phones, GPS receiver and PDA (Personal Digital Assistant) devices are so common that they have become standard in today's digital examinations. These small devices carry a large burden for the forensic examiner, with different handling rules from scene to lab and with the type of data being as diverse as the suspects they come from. Handheld devices are rooted in their own operating systems, file systems, file formats, and methods of communication. Dealing with this creates unique problems for examiners.

Incident response

Forensics is closely tied to incident response, which we discussed in Chapter 8, Domain 7: Operations Security. This chapter focuses on the legal aspects of incident response, including incident response documentation. Responding to incidents can be highly stressful. In these high-pressure times it is easy to focus on resolving the issue at hand while overlooking the requirement for detailed, thorough documentation. If every response action taken and every output received is not documented, then the incident responder is working too quickly and is not documenting the incident to the degree that may be required by legal proceedings. It is difficult to know at the beginning of an investigation whether or not it will eventually land in a court of law. An incident responder should not need to recall the details of a past incident from memory; documentation written while handling the incident should provide all necessary details.


URL: https://www.sciencedirect.com/science/article/pii/B9781597499613000108

Domain 7: Security Operations (e.g., Foundational Concepts, Investigations, Incident Management, Disaster Recovery)

Eric Conrad, ... Joshua Feldman, in CISSP Study Guide (Third Edition), 2016

Forensics

Digital forensics provides a formal approach to dealing with investigations and evidence with special consideration of the legal aspects of this process. Forensics is closely related to incident response, which is covered later in this chapter. The main distinction between forensics and incident response is that forensics is evidence-centric and typically more closely associated with crimes, while incident response is more dedicated to identifying, containing, and recovering from security incidents.

The forensic process must preserve the “crime scene” and the evidence in order to prevent unintentionally violating the integrity of either the data or the data’s environment. A primary goal of forensics is to prevent unintentional modification of the system. Historically, this integrity focus led investigators to cut a system’s power to preserve the integrity of the state of the hard drive, and prevent an interactive attacker or malicious code from changing behavior in the presence of a known investigator. This approach persisted for many years, but is now changing due to antiforensics.

Exam Warning

Always ensure that any forensic actions uphold integrity, and are legal and ethical.

Antiforensics makes forensic investigation difficult or impossible. One antiforensic method is malware that is entirely memory-resident, and not installed on the disk drive. If an investigator removes power from a system with entirely memory-resident malware, all volatile memory including RAM is lost, and evidence is destroyed. Because of the investigative value of information available only in volatile memory, the current forensic approach favors some degree of live forensics that includes taking a bit by bit, or binary image of physical memory, gathering details about running processes, and gathering network connection data.

The general phases of the forensic process are: the identification of potential evidence; the acquisition of that evidence; analysis of the evidence; and production of a report. Acquisition will leverage binary backups and the use of hashing algorithms to verify the integrity of the binary images, which we will discuss shortly. When possible, the original media should not be used for analysis: a forensically sound binary backup should be used. The final step of the forensic process involves the creation of a forensic report that details the findings of the analysis phase.

Forensic Media Analysis

In addition to the valuable data gathered during the live forensic capture, the main source of forensic data typically comes from binary images of secondary storage and portable storage devices such as hard disk drives, USB flash drives, CDs, DVDs, and possibly associated cellular phones and MP3 players. A binary, or bit-stream, image is used because an exact replica of the original data is needed. Normal backup software will only archive allocated data on the active partitions of a disk. Normal backups could miss significant data that had been intentionally deleted by an attacker; as such, binary images are preferred.

Here are the four basic types of disk-based forensic data:

Allocated space—portions of a disk partition that are marked as actively containing data.

Unallocated space—portions of a disk partition that do not contain active data. This includes portions that have never been allocated, and previously allocated portions that have been marked unallocated. If a file is deleted, the portions of the disk that held the deleted file are marked as unallocated and made available for use.

Slack space—data is stored in specific size chunks known as clusters (clusters are sometimes also referred to as sectors or blocks). A cluster is the minimum size that can be allocated by a file system. If a particular file, or final portion of a file, does not require the use of the entire cluster then some extra space will exist within the cluster. This leftover space is known as slack space: it may contain old data, or can be used intentionally by attackers to hide information.

“Bad” blocks/clusters/sectors—hard disks routinely end up with sectors that cannot be read due to some physical defect. The sectors marked as bad will be ignored by the operating system since no data could be read in those defective portions. Attackers could intentionally mark sectors or clusters as being bad in order to hide data within this portion of the disk.

Given the disk level tricks that an attacker could use to hide forensically interesting information, a binary backup tool is used rather than a more traditional backup tool that would only be concerned with allocated space. There are numerous tools that can be used to create this binary backup including free tools such as dd and windd as well as commercial tools such as Ghost (when run with specific non-default switches enabled), AccessData’s FTK, or Guidance Software’s EnCase.

Learn By Example

Live Forensics

While forensics investigators traditionally removed power from a system, the typical approach now is to gather volatile data. Acquiring volatile data is called live forensics, as opposed to the post mortem forensics associated with acquiring a binary disk image from a powered down system. One attack tool stands out as having brought the need for live forensics into full relief.

Metasploit is an extremely popular free and open source exploitation framework. A strong core group of developers led by HD Moore have consistently kept it on the cutting edge of attack techniques. One of the most significant achievements of the Metasploit framework is the modularization of the underlying components of an attack. This modularization allows for exploit developers to focus on their core competency without having to expend energy on distribution or even developing a delivery, targeting, and payload mechanism for their exploit; Metasploit provides reusable components to limit extra work.

A payload is what Metasploit does after successfully exploiting a target; Meterpreter is one of the most powerful Metasploit payloads. As an example of some of the capabilities provided by Meterpreter, Figure 8.1 shows the password hashes of a compromised computer being dumped to the attacker’s machine. These password hashes can then be fed into a password cracker that would eventually figure out the associated passwords. Or the password hashes might be used directly in Metasploit’s PSExec exploit module, which is an implementation of functionality provided by Sysinternals’ (now owned by Microsoft) PSExec, but bolstered to support Pass the Hash functionality. Information on Microsoft’s PSExec can be found at http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx. Further details on Pass the Hash techniques can be found at http://www.coresecurity.com/corelabs-research/open-source-tools/pass-hash-toolkit.


Figure 8.1. Dumping Password Hashes with Meterpreter

In addition to dumping password hashes, Meterpreter provides such features as:

Command execution on the remote system

Uploading or downloading of files

Screen capture

Keystroke logging

Disabling the firewall

Disabling antivirus

Registry viewing and modification (as seen in Figure 8.2)


Figure 8.2. Dumping the Registry with Meterpreter

And much more: Meterpreter’s capabilities are updated regularly

In addition to the above features, Meterpreter was designed with detection evasion in mind. Meterpreter can provide almost all of the functionalities listed above without creating a new file on the victim system. Meterpreter runs entirely within the context of the exploited victim process, and all information is stored in physical memory rather than on the hard disk.

Imagine an attacker has performed all of the actions detailed above, and the forensic investigator removed the power supply from the compromised machine, destroying volatile memory: there would be little to no information for the investigator to analyze. The possibility of Metasploit’s Meterpreter payload being used in a compromise makes volatile data acquisition a necessity in the current age of exploitation.

Network Forensics

Network forensics is the study of data in motion, with special focus on gathering evidence via a process that will support admission into court. This means the integrity of the data is paramount, as is the legality of the collection process. Network forensics is closely related to network intrusion detection: the difference is the former is legal-focused, and the latter is operations-focused. Network forensics is described as: “Traditionally, computer forensics has focused on file recovery and filesystem analysis performed against system internals or seized storage devices. However, the hard drive is only a small piece of the story. These days, evidence almost always traverses the network and sometimes is never stored on a hard drive at all.

With network forensics, the entire contents of e-mails, IM conversations, Web surfing activities, and file transfers can be recovered from network equipment and reconstructed to reveal the original transaction. The payload inside the packet at the highest layer may end up on disc, but the envelope that got it there is only captured in the network traffic. The network protocol data that surrounded each conversation is often extremely valuable to the investigator. Network forensics enables investigators to piece together a more complete picture using evidence from the entire network environment.” [2]

Forensic Software Analysis

Forensic software analysis focuses on comparing or reverse engineering software: reverse engineering malware is one of the most common examples. Investigators are often presented with a binary copy of a malicious program, and seek to deduce its behavior.

Tools used for forensic software analysis include disassemblers and software debuggers. Virtualization software also comes in handy: investigators may intentionally infect a virtual operating system with a malware specimen, and then closely monitor the resulting behavior.

Embedded Device Forensics

One of the greatest challenges facing the field of digital forensics is the proliferation of consumer-grade electronic hardware and embedded devices. While forensic investigators have had decades to understand and develop tools and techniques to analyze magnetic disks, newer technologies such as Solid State Drives (SSDs) lack both forensic understanding and forensic tools capable of analysis.

Vassilakopoulos Xenofon discussed this challenge in his paper GPS Forensics, A systemic approach for GPS evidence acquisition through forensics readiness: “The field of digital forensics has long been centered on traditional media like hard drives. Being the most common digital storage device in distribution it is easy to see how they have become a primary point of evidence. However, as technology brings digital storage to be more and more of larger storage capacity, forensic examiners have needed to prepare for a change in what types of devices hold a digital fingerprint. Cell phones, GPS receiver and PDA (Personal Digital Assistant) devices are so common that they have become standard in today’s digital examinations. These small devices carry a large burden for the forensic examiner, with different handling rules from scene to lab and with the type of data being as diverse as the suspects they come from. Handheld devices are rooted in their own operating systems, file systems, file formats, and methods of communication. Dealing with this creates unique problems for examiners.” [3]

Electronic Discovery (eDiscovery)

Electronic discovery, or eDiscovery, pertains to legal counsel gaining access to pertinent electronic information during the pre-trial discovery phase of civil legal proceedings. The general purpose of discovery is to gather potential evidence that will allow for building a case. Electronic discovery differs from traditional discovery simply in that eDiscovery seeks ESI, or electronically stored information, which is typically acquired via a forensic investigation. While the difference between traditional discovery and eDiscovery might seem minuscule, given the potentially vast quantities of electronic data stored by organizations, eDiscovery can prove logistically and financially cumbersome.

Some of the challenges associated with eDiscovery stem from the seemingly innocuous backup policies of organizations. While long-term storage of computer information has generally been thought to be a sound practice, this data is discoverable. To be discoverable, which simply means open for legal discovery, ESI does not need to be conveniently accessible or transferable. The onus falls to the organization to produce the data to opposing counsel with little to no regard for the cost incurred by the organization to actually provide the ESI.

Appropriate data retention policies, as well as perhaps software and systems designed to facilitate eDiscovery, can greatly reduce the burden felt by the organization when required to provide ESI for discovery. When considering data retention policies, consider not only how long information must be kept, which has typically been the focus, but also how long information needs to be accessible to the organization. Any data for which there is no longer a need should be appropriately purged according to the data retention policy. Data that is no longer maintained due to policy is necessarily not available for discovery purposes.

Please see the Legal and Regulatory Issues section of Chapter 2, Domain 1: Security and Risk Management for more information on related legal issues.


URL: https://www.sciencedirect.com/science/article/pii/B9780128024379000084

iPod, Cell Phone, PDA, and BlackBerry Forensics

Littlejohn Shinder, Michael Cross, in Scene of the Cybercrime (Second Edition), 2008

Step 4: Documentation

As with any component in the forensic process, it is critical that you maintain your documentation and “chain of custody.” As you collect information and potential evidence, you need to record all visible data. Your records must document the case number and the date and time the evidence was collected. Additionally, the entire investigation area needs to be photographed, including any devices that are connected to the PDA or could be connected to it. Another part of the documentation process is to generate a report that describes, in detail, the entire forensic process you are performing. Within this report you need to annotate the state and status of the device in question during your collection process. The final step of the collection process consists of accumulating all of this information and storing it in a secure and safe location.
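
To show one way the documentation described above can be kept consistent, the Python sketch below appends structured entries, with a case number, timestamp, device state, and action taken, to a simple log file. The field names and file name are illustrative assumptions rather than a prescribed format; many laboratories use dedicated case-management software instead.

    import csv
    import datetime
    from pathlib import Path

    LOG_FIELDS = ["case_number", "timestamp_utc", "device", "device_state", "action", "examiner"]

    def log_custody_entry(case_number, device, device_state, action, examiner,
                          log_path="chain_of_custody.csv"):
        """Append one chain-of-custody entry, creating the log with a header if needed."""
        path = Path(log_path)
        is_new = not path.exists()
        with path.open("a", newline="") as fh:
            writer = csv.DictWriter(fh, fieldnames=LOG_FIELDS)
            if is_new:
                writer.writeheader()
            writer.writerow({
                "case_number": case_number,
                "timestamp_utc": datetime.datetime.utcnow().isoformat(),
                "device": device,
                "device_state": device_state,
                "action": action,
                "examiner": examiner,
            })

    # Illustrative usage with made-up values.
    log_custody_entry("2024-001", "PDA, serial ABC123", "powered on, 62% battery",
                      "photographed device and attached cables", "J. Examiner")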


URL: https://www.sciencedirect.com/science/article/pii/B978159749276800008X

The Foundations of Digital Forensics

Larry E. Daniel, Lars E. Daniel, in Digital Forensics for Legal Professionals, 2012

4.3.3 Acquisition best practices

Acquisition is the part of the forensic process during which actual data is copied or duplicated. Following proper procedures is critical to ensure the integrity of evidence. The acquisition portion can be further broken down into two steps: duplication and verification.

Duplication: This is one step that is easily performed incorrectly, especially if it is performed by someone who is not trained to properly duplicate electronic evidence. The only accepted method for duplicating electronic evidence requires that the original be protected from any possibility of alteration during the duplication process. This requires the use of accepted tools and techniques that allow the duplication of the evidence in a forensically sound manner. Using nonforensic methods will always lead to modification of the original evidence and/or incomplete copies of the original evidence that cannot be verified using forensic methods.

Forensic method: The proper forensic method for duplicating evidence from a computer hard drive or other media storage device requires the use of write-blocking of the original storage device. Write-blocking can be accomplished either by using a physical hardware device that is connected between the original (source) and the copy (target) hard drive (see Figure 4.2) or by using a special boot media that can start a computer in a forensically sound manner.


Figure 4.2. A physical write-blocker like the one shown here connected between the hard drives prevents any modification of the source hard drive

The best option for making a forensic copy of a hard drive is to remove the hard drive from the computer, connect it to a physical write-blocker, and then use a forensic workstation and forensic software to make the copy. However, in some cases it is not practical to remove the hard drive. The computer may be of a type that makes the hard drive removal very difficult, such as some types of laptop. When this is the case, making a copy of the hard drive using a software write-blocking technique is the correct method.

To use a software-based write-blocking method, the computer must be started up in a forensically sound manner.

When a computer is first turned on, it goes through a set of steps, beginning with a Power On Self-Test (POST), followed by loading of the Basic Input Output System (BIOS). The BIOS is software that is stored on the main board of the computer that tells the computer what types of hard drives are present; initializes the keyboard and other input and output ports, such as the USB ports; initializes the computer video card; and basically prepares the computer hardware to operate before it can load the operating system software. Settings in the BIOS tell the computer where to look for the operating system to start up, such as on a hard drive, from a floppy disk, a CD-ROM, or a USB device.

During normal operation, the computer will load the operating system installed on the hard drive, such as Microsoft Windows or the Mac OS. It is possible to prevent the computer from loading the operating system that is installed on the hard drive in favor of loading an operating system from a CD-ROM, floppy disk, or USB device.

When preparing to perform a forensic copy of a computer’s hard drive(s), a forensic examiner would force the computer to load a special forensic operating system from a specially prepared boot media. This can be done by changing the settings in the computer BIOS to tell the computer to look for an operating system on a CD-ROM, a USB device, or a floppy disk. This can also be done by pressing a function key when the computer is first turned on to bypass the default setting in the BIOS for the startup location for the operating system. For instance, pressing F9 on many computers will bring up a menu where the examiner can choose which device to use to load the operating system. This can also be done on a Mac by pressing and holding the C key while powering on the computer.

This boot media can be a floppy disk, CD-ROM, or USB device that is specially prepared to load a forensically sound operating system. This is critical because when a computer starts up (boots) normally from the installed operating system, whether Windows or Mac OS or Linux, these operating systems automatically “mount” the hard drive(s) in read/write mode. This allows the user to read and write files, such as documents, to and from the hard drive.

Special boot media is media that contains an operating system that can start a computer up, but does not allow writing to the original hard drive. These forensic operating systems are modified to effectively turn off the ability of the computer to make any changes to the hard drive(s).

Once the computer is started up, either with a hardware write-blocker in place or by using a forensic operating system, the forensic examiner would make a forensic copy of the hard drive(s) installed in the computer.

Making a forensic copy of a hard drive means getting a “bitstream” copy, which is an exact duplicate of the entire hard drive recording surface.

Nonforensic Method: Personnel not trained in the proper forensic methods for duplicating electronic evidence may start a computer up and then make copies of the data on the hard drive. When a computer is started up in this manner, the operating system can write to the hard drive, changing file dates, log files, and other types of files, effectively modifying and destroying critical evidence. Figure 4.3 shows two hard drives connected without any protection in place for the original evidence hard drive, putting the evidence at risk.


Figure 4.3. An unprotected hard drive will always be modified when a computer starts up

Nonforensic methods usually involve simply copying files from a hard drive to another storage device or using a backup program like Norton Ghost. While Norton Ghost has the ability to make a forensically complete (bitstream) copy, it is not generally accepted as forensically sound because Ghost copies are difficult to verify using hash values. (Hash values for verification are covered in detail in Chapter 26.) The reason for this is that Norton Ghost does not have a method for creating a hash value of the evidence being copied during the copy process. Additionally, a nonforensic copy of a hard drive will capture only the data stored on the hard drive that is visible to the computer user, such as documents, spreadsheets, and Internet history. A nonforensic copy will not capture deleted files or areas of the hard drive where evidence can still reside.

Verification: This is the final step in the forensic copy process. In order for evidence to be admissible, there must be a method to verify that the evidence presented is exactly the same as the original collected. Verification is accomplished by using a mathematical algorithm that calculates a number based on the contents of the evidence. Figure 4.4 illustrates the drive and file hashing process used to calculate the verification hash.


Figure 4.4. A hash value is calculated and stored for each data item copied and for the entire source hard drive

This is called creating a “hash value” and is performed by using either the Message Digest 5 (MD-5) algorithm or a Secure Hash Algorithm (SHA). The MD-5 is the most commonly used method for verification in computer forensics at this time. Forensic duplication tools automatically create a “verification” hash for the original and the copy during the duplication process. If these hash values do not match, there is an opening for a challenge to the authenticity of the evidence as compared to the original.
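
The Python sketch below shows the kind of verification the text describes: the same hashing routine is run over the original media and over the forensic copy, and the resulting values are compared. The file paths are placeholders, and the choice of MD5 plus SHA-256 is purely illustrative; the forensic duplication tools mentioned in this chapter generate and record these values automatically.

    import hashlib

    def hash_media(path, algorithms=("md5", "sha256"), chunk_size=1024 * 1024):
        """Return a dict of hex digests for the file or device at path, read in chunks."""
        digests = {name: hashlib.new(name) for name in algorithms}
        with open(path, "rb") as fh:
            while True:
                chunk = fh.read(chunk_size)
                if not chunk:
                    break
                for d in digests.values():
                    d.update(chunk)
        return {name: d.hexdigest() for name, d in digests.items()}

    # Placeholder paths: the original (behind a write-blocker) and the forensic copy.
    # original = hash_media("/dev/sdb")
    # duplicate = hash_media("evidence.dd")
    # assert original == duplicate, "Verification failed: hash values do not match"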


URL: https://www.sciencedirect.com/science/article/pii/B9781597496438000043

Collecting evidence

John Sammons, in The Basics of Digital Forensics (Second Edition), 2015

Uses of hashing

Hash values can be used throughout the digital forensic process. They can be used after the cloning process to verify that the clone is indeed an exact duplicate. They can also be used as an integrity check at any point that one is needed. Examiners often have to exchange forensic images with the examiner on the opposing side. A hash value is sent along with the image so it can be compared with the original. This comparison verifies that the image is a bit-for-bit copy of the original. In addition, hash values can be used to identify specific files.
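
As an illustration of the last use mentioned above, identifying specific files, the Python sketch below hashes every file under a directory and flags any whose SHA-256 value appears in a set of known hashes. The directory path and the reference set are hypothetical; in practice the reference hashes would come from a curated source such as a known-contraband or known-good hash list.

    import hashlib
    from pathlib import Path

    def find_known_files(root, known_hashes):
        """Yield (path, digest) for files under root whose SHA-256 is in known_hashes."""
        for path in Path(root).rglob("*"):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                if digest in known_hashes:
                    yield path, digest

    # Hypothetical reference set; this value is simply the SHA-256 of an empty file.
    KNOWN = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
    for hit, digest in find_known_files("/evidence/mounted_image", KNOWN):
        print(hit, digest)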

The relevant hash values that were generated and recorded throughout the case should be kept and included with the final report. These digital fingerprints are crucial to demonstrating the integrity of the evidence and ultimately getting that evidence before the jury.


URL: https://www.sciencedirect.com/science/article/pii/B9780128016350000048

The key to forensic success: examination planning is a key determinant of efficient and effective digital forensics

Mark Pollitt, in Digital Forensics, 2016

The four phases of digital forensics

There are dozens of models that describe the digital forensic process. In this chapter, I will use a relatively simple one, published in 2006 by the National Institute of Standards and Technology in SP 800-86, which describes the process in four stages, defined as follows.

Collection. Data are identified, labeled, recorded, and acquired from all of the possible sources of relevant data, using procedures that preserve the integrity of the data. Data should be collected in a timely manner to avoid the loss of dynamic data, such as a list of current network connections, and the data collected in cell phones, PDAs, and other battery-powered devices.

Examination. The data that are collected should be examined using a combination of automated and manual methods to assess and extract data of particular interest for the specific situation, while preserving the integrity of the data.

Analysis. The results of the examination should be analyzed, using well-documented methods and techniques, to derive useful information that addresses the questions that were the impetus for the collection and examination.

Reporting. The results of the analysis should be reported. Items to be reported may include the following: a description of the actions employed; an explanation of how tools and procedures were selected; a determination of any other actions that should be performed, such as forensic examination of additional data sources, securing identified vulnerabilities, and improving existing security controls; and recommendations for improvements to policies, guidelines, procedures, tools, and other aspects of the forensic process (Fig. 2.1).


Figure 2.1.

For the purposes of this chapter, we will focus on the second and third phases – examination and analysis. As the definitions make clear, the examination phase is the critical bridge between the collection of the evidence and analysis which makes use of the evidence in the legal context.

Using these definitions, we have two goals. The first is to preserve the integrity of the evidence. While this may seem obvious, it puts significant constraints on our examination. We must only use tools and techniques that do not alter the original evidence. It also forces us to preserve the context of the data, such as its location within a file system and its metadata. The second goal is really the point of a digital forensic examination: to find and make available information of value to the submitter of the evidence. This raises the question: what is “of particular interest”? This is the first of many questions we should answer before we begin “flipping bits.”


URL: https://www.sciencedirect.com/science/article/pii/B9780128045268000022

An Introduction to Computer Forensics

Dr. Gerald L. Kovacich, Dr. Andy Jones, in High-Technology Crime Investigator's Handbook (Second Edition), 2006

THE STAGES THAT MAKE UP THE FORENSIC PROCESS

As indicated in the previous definition, the computer forensic process can be broken down into a number of distinct stages. These are described in more detail here:

Evidence collection—The collection of any digital information that may be used as evidence must be carried out by trained staff and must follow recognized and accepted procedures so that its value as evidence is preserved for use in any legal or disciplinary proceedings. (Refer to Chapter 7 for a discussion of equipment and data that may be relevant.)

Preservation of evidence—This factor is fundamental in all computer forensics activities. If potential evidence is not preserved in a forensically sound manner, then it may have little or no value in any criminal or civil proceedings, although it may still be used as intelligence to inform the investigation. The preservation of evidence must be conducted by staff members who are trained and skilled in the required techniques and use of the appropriate tools to preserve the evidence in an unaltered condition. Procedures that have been developed and tested and are known to be accepted by the courts should be followed whenever possible. The preservation of evidence must be considered at all stages of the investigation.

Examination of evidence—The examination of evidence must be conducted by staff members who are trained and experienced, and who use tools that have been tested or accepted by the courts as providing information in a true form. Any data produced for use as evidence must be capable of being reproduced by another investigator. The examination of the devices for evidence must be conducted in an in-depth and comprehensive manner. The examination of the evidence should, whenever possible, be carried out on an image of the original material rather than the original material itself, although it is accepted that, in exceptional circumstances, this may not be possible.

Analysis of evidence—The analysis of evidence is the forensic phase during which the information that has been preserved and examined is interpreted to draw conclusions and to determine the truth of what has occurred in the period leading up to and during an incident. This will normally take place in the computer forensic laboratory and consideration should be given to ensuring that any results are documented and can be recreated by another investigator.

Presentation of findings—The presentation of the findings of the analyzed data is as important as any other phase of the forensic process. If the findings are not presented in a coherent, comprehensive, and believable form, then the effort that has been taken during the preceding phases will have been wasted.

Evidence collection → Preservation of evidence → Examination of evidence → Analysis of evidence → Presentation of findings

Always remember that evidence that is being presented must be:

Admissible—It must conform to the relevant legal rules within the jurisdiction.

Authentic—The evidence that is presented must be traceable back to the incident.

Complete—The evidence must cover all aspects of the incident, not just those that address one perspective.

Reliable—It must be provable that all aspects relating to the evidence have followed appropriate and relevant guidelines and procedures, and that the evidence being presented is authentic.

Believable—The people to whom the evidence is being presented must find the evidence both understandable and believable.

If the evidence does not have all of these characteristics, it will fail to achieve its purpose.


URL: https://www.sciencedirect.com/science/article/pii/B9780750679299500543

Digital Forensics and Analyzing Data

Dale Liu, in Cisco Router and Switch Forensics, 2009

Hardware Documentation Difficulties

Documenting hardware configuration is a tedious but essential part of the forensic process. The amount of documentation required correlates directly with the number and types of devices being acquired. What we as examiners cannot afford to forget are the various aspects of documenting hardware.

Within the documentation process itself, you need to document all the system configurations, including the installed hardware and BIOS settings such as the boot device. Another essential aspect of hardware documentation is the time settings of the system and the system clock of each device. You must document the system time and compare it to the actual time. The time zone setting may also be crucial when creating timelines or performing other analyses. You should note the presence of a Network Time Protocol (NTP) time server. Remember, a system on a Windows domain will sync its time with the domain controller, but the time by default can be off by 20 seconds and still function properly.
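
A small sketch of that time check follows: it records the local system clock against an NTP reference so that any skew can be accounted for when building timelines. It assumes the third-party ntplib package and network access to a public NTP pool, neither of which may be available or appropriate in a given examination; the reference time could just as well come from a trusted hardware clock and be noted by hand.

    import datetime
    import ntplib  # third-party library; assumed to be installed and permitted

    def record_clock_skew(server="pool.ntp.org"):
        """Document local system time against an NTP reference and report the offset."""
        local = datetime.datetime.now(datetime.timezone.utc)
        response = ntplib.NTPClient().request(server, version=3)
        reference = datetime.datetime.fromtimestamp(response.tx_time, datetime.timezone.utc)
        offset = (local - reference).total_seconds()
        print(f"Local (UTC):     {local.isoformat()}")
        print(f"Reference (UTC): {reference.isoformat()}")
        print(f"Offset:          {offset:+.3f} seconds")

    if __name__ == "__main__":
        record_clock_skew()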

Traditional forensics dictates that you document all identifying labels and numbers. Often, an examiner will take pictures of all sides of the system as well as labels on the system as part of the documentation process. This can also be extremely difficult with large systems. It could take a day to unrack and photograph all the systems in a rack. Depending on the approach you take to acquire data from a system, you may need to conduct complete and detailed hardware documentation after acquiring the system. If the system is live, it most likely will not be desirable to shut it down to document it and then to restart it to perform the acquisition. If possible, take no more than a day to analyze a blade server enclosure and the servers in a data center. Consider how to document each blade as you would a typical PC. Then think about the fact that a typical rack can often hold six enclosures holding 16 blade servers. The IT staff at the client company may have decent documentation for you to work from; if you can verify from their existing documentation instead of working from scratch, you can save a lot of time.

A large storage system is another instance where you will probably need to document the devices after you acquire them, unless you use the physical option. This is because it may not be practical to image each drive individually. Once the storage system's logical image is complete, you can remove the drives from the enclosure and document them. The documentation of rack after rack of hard drives can be even more daunting than documenting blade servers.

You also should document the network topology and any systems that directly interface with the system, such as through NFS or SMB mounts. If the investigation expands, it may be necessary to increase the documentation of the surrounding network to encompass the switches, routers, and any other network equipment. In the case of an intrusion, any of these paths could be the source of the compromise.

A final item to document is the console location, if one exists. Even today, not all unauthorized access happens through a network connection.

Complete and clear documentation is the key to a successful investigation. If the incident leads to litigation, the report created from the documentation will be a valuable reference for the examiner. Complete documentation will help to remove any doubt cast by the defense or other party in a civil matter.


URL: https://www.sciencedirect.com/science/article/pii/B9781597494182000016
