CADO LIVE DOCUMENTATION

Cado Live is a free tool that securely copies systems from a bootable USB disk into cloud storage for forensic analysis.

Why Image Systems to the Cloud?

When done right, cloud storage is cheap, secure, and can be local to your region. It also enables you to spin up systems to quickly analyse data in cloud storage.

Cado Live enables a number of scenarios such as:

  • An examiner can go to a target machine with just a USB and securely create an image of it that can be examined later in their remote office

  • An examiner can instruct a customer on the other side of the world on how to create a USB and quickly deliver a forensic image of a machine back to them

Which Cloud Storage Providers are supported?

Cado Live can upload data to:

  • Amazon Web Services (AWS) S3

  • Azure Storage Blobs

  • Google Cloud Storage

  • Local Storage (such as a plugged-in USB drive)

What Target Systems are supported?

Any system that can boot Ubuntu Linux 20.04 should be supported.

We have tested Cado Live on various pieces of new and old hardware, including various Linux, Windows and Apple based products, and it should work for most devices. See the FAQ below for more details on issues you may encounter with some devices, and possible solutions.

Whilst in Beta, we would like to create a supported hardware list. If you have something to add, or if your hardware isn’t supported, let us know and we can take it into consideration for future releases.

Creating a Bootable USB Disk

First you will need to create a bootable USB disk from our image:

We recommend creating the Cado Live image using a tool such as Rufus on Windows, or UNetbootin on macOS and Linux.

The default selections in Rufus should produce a USB that you can boot from.

Under Device, select your USB disk.

Click Select and choose the ISO file for Cado Live.

Click Start to create the bootable USB Disk.

You now have a USB disk that is ready for imaging. You can use it yourself, or safely deliver it to someone else to image systems for you to access.

Note: Rufus will wipe anything on your USB drive, so make sure you have the right drive selected.

Creating Secure Credentials

It is important to use credentials whose access is limited to writing objects to your cloud storage.

Otherwise, if an attacker finds your credentials, they could compromise your data.

Before using Cado Live, you will need to create secure credentials to upload with:

Tip: to save typing the credentials into Cado Live, we recommend saving them somewhere secure that you can reach from Cado Live itself, either via the browser or via secure USB storage, and copying and pasting them.
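As an illustration for AWS, the sketch below creates a user that can only write objects into a single bucket. Treat it as an assumption-laden example rather than a prescription: the user name, policy name and bucket are placeholders, and depending on how uploads are performed you may need to grant further actions (for example s3:AbortMultipartUpload).

# Hypothetical write-only upload user for AWS S3 (all names are placeholders)
cat > write-only-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:PutObject"],
    "Resource": "arn:aws:s3:::EXAMPLE/*"
  }]
}
EOF
aws iam create-user --user-name cado-upload
aws iam put-user-policy --user-name cado-upload --policy-name cado-write-only \
  --policy-document file://write-only-policy.json
aws iam create-access-key --user-name cado-upload   # prints the access key and secret to enter into Cado Live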

Booting from the USB

Now insert the USB into the target machine, and reboot into the USB.

If the machine doesn’t load Cado Live from the USB, you may need to change your boot order or disable secure boot. Note: don’t forget to change your settings back once you’re done.

Getting Started

You can start the GUI by clicking the “Cado Live Icon” on the desktop side panel.

You can start the command line interface by running “cado_cli” in a terminal window.

Should you require it, the administrator (“sudo”) account is called cado and the password is odac.

Creating an Image – GUI

 

When you first open the UI, you will be presented with the home screen.

Click Enter Settings to open the settings page.

Select the Disk you would like to image:

Enter the credentials for the Cloud Storage (or local storage) you would like to use:

Now enter the Acquisition Settings you would like to use.

The Compress option will gzip-compress your disk image (supported for AWS, Azure and local storage, but not GCP).

Enabling Detailed Logging will produce more verbose logs, viewable on the tasks page.

The Generate Hash option lets you generate either a SHA256 or MD5 hash of the target drive prior to acquisition.
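If you want to verify the hash yourself later, a minimal sketch is below (assuming a gzip-compressed image; the device and filename are taken from the example logs later in this document):

sudo sha256sum /dev/sdb                      # hash of the original device
gzip -dc cado_w_1587983016.gz | sha256sum    # hash of the downloaded image, decompressed on the fly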

If you do not set the Output Filename an autogenerated filename (cado_$RandomLetter_$Timestamp) will be used.

Optionally, you can choose to enter Case Notes for the acquisition.

 

These notes will be stored in the logs sent to cloud storage.

Click Save Settings, and the home page will now show that you are ready to image.

Click Start Acquiring and the system will start to image to the cloud.

You should see a pre-acquisition log uploaded within seconds to cloud storage, then the disk image, then a post-acquisition log.
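For example, on AWS you can watch the objects arrive with the AWS CLI (the bucket and filenames are placeholders that follow the pattern in the CLI logs later in this document):

aws s3 ls s3://EXAMPLE/
# cado_w_1587983016.gz.startlog.txt   <- pre-acquisition log
# cado_w_1587983016.gz                <- disk image
# cado_w_1587983016.gz.endlog.txt     <- post-acquisition log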

You can also view progress under the Tasks page.

The Imaging Progress bar reports how far through the drive Cado Live is. When it updates to “Acquisition Completed”, the disk has been completely acquired.

 

Note: there may be a slight delay between data being imaged and data being sent, so please wait for it to say “Acquisition Completed”.

You can click View Log for more details.

 

Creating an Image – CLI

You can also use the command line interface (CLI) to create an image.

Cado Live will attempt to read settings from a pre-completed file (config.cfg) in the /opt/cado/ directory, if one is available.

An example of a completed config.cfg file is below:

[CORE]
storage = aws
compress = yes
output_filename = speed_test
debug = no
hash_file = no
disk = /dev/sdc

[CASE]
case_identifier = Case01
evidence_identifier = Evidence01
description = An acquired disk
examiner = Chris Doman
case_notes = This case rules

[AWS]
access_key = EXAMPLE
secret_key = EXAMPLE
bucket = EXAMPLE

[GOOGLE]
gcp_access_key = EXAMPLE
gcp_secret_key = EXAMPLE
gcp_bucket = EXAMPLE

[LOCAL]
destination_folder = /tmp/

[AZURE]
access_signature = ?st=EXAMPLE
account_name = EXAMPLE
container_name = EXAMPLE
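With a file like this in place, the CLI runs without prompting. A minimal sketch, assuming you have booted into Cado Live and saved the file to your current directory (sudo may be needed to write to /opt/cado/):

sudo cp config.cfg /opt/cado/config.cfg
cado_cli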

If you haven’t created a config.cfg settings file in /opt/cado/, you will be prompted to enter settings to start the upload:

               _         _
              | |       (_)
  ___ __ _  __| | ___    _ _ __ ___   __ _  __ _  ___ _ __
 / __/ _` |/ _` |/ _ \  | | '_ ` _ \ / _` |/ _` |/ _ \ '__|
| (_| (_| | (_| | (_) | | | | | | | | (_| | (_| |  __/ |
 \___\__,_|\__,_|\___/  |_|_| |_| |_|\__,_|\__, |\___|_|
                                            __/ |
                                           |___/
  
No config.cfg detected, please enter information for destination
No storage settings in configuration file.

Upload data to:
1 - AWS
2 - Azure
3 - Google
4 - Local
Storage choice (1/2/3/4) : 1

AWS S3 Bucket Name: EXAMPLE
Access Key: EXAMPLE
Secret Key: EXAMPLE

Compress output file? Y/N: Y

1 - /dev/sdb - 0.0156 GB
2 - /dev/sdc - 0.625 GB
3 - /dev/sda - 20.0 GB
All
Which hard drive would you like to image? : 1

Acquiring /dev/sdb
Please type filename to output to. If you dont provide a filename, we will autogenerate one for you:
 
Uploading to AWS S3

Cado Cado Live - Start Log
Time: 2020-04-27 10:23:36.786157 UTC
Acquiring From: /dev/sdb (20.0 GB)
Acquiring To: s3://EXAMPLE/cado_w_1587983016.gz
Log Written To: s3://EXAMPLE/cado_w_1587983016.gz.startlog.txt
SHA256 Hash of Original File: Not Calculated
      
Uploading Disk Image...
81+1 records in
81+1 records out
16730624 bytes (17 MB, 16 MiB) copied, 0.479672 s, 34.9 MB/s
Finished Uploading Disk Image...

Cado Cado Live - End Log
Time: 2020-04-27 10:23:39.968512 UTC
Acquired From: /dev/sdb (20.0 GB)
Acquired To: s3://EXAMPLE/cado_w_1587983016.gz
Log Written To: s3://EXAMPLE/cado_w_1587983016.gz.endlog.txt
SHA256 Hash of Original File: Not Calculated
            
Cado Live Completed

FAQs

 

What is Cado Live distro based on?

Cado Live is based on the Ubuntu 20.04 operating system.

Are there any other tools on the distribution?

All the usual Linux base tools are installed, as well as VeraCrypt, should you need encryption for local drive imaging.

If you are connected to the internet, you can install additional packages or drivers using the Ubuntu package manager or the command line (e.g. sudo apt-get install packagename).

How should I retrieve the disk image once it has uploaded?

You may want to keep any files in cloud storage and process them within the cloud environment. This allows you to quickly spin up powerful machines for analysis.

 

Alternatively, if you want to download the files locally for analysis, you can either use the provider’s native tools, or the free tool Cyberduck, which we have found works well in most cases.
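As an example with the native tools, downloading an image and its logs from AWS S3 with the AWS CLI might look like this (the bucket and object names are placeholders matching the logs above):

aws s3 cp s3://EXAMPLE/cado_w_1587983016.gz .
aws s3 cp s3://EXAMPLE/cado_w_1587983016.gz.endlog.txt .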

For GCP we recommend the gsutil tool. Once it is installed, authenticate and pick your project; some useful commands to get you started are below, followed by an example session.

  • gsutil ls (will list your buckets)

  • gsutil cp -r gs://my-bucket/remoteDirectory LocalDirectory (will copy everything in that bucket to your local directory)
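A typical session, assuming gsutil was installed as part of the Google Cloud SDK (the project and bucket names are placeholders):

gcloud auth login                      # authenticate in a browser
gcloud config set project my-project   # pick your project
gsutil ls                              # list your buckets
gsutil cp -r gs://my-bucket/remoteDirectory LocalDirectory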

Why does the imaging appear to go quickly, then slow down?

We use buffers to increase the overall upload speed, which normally means it looks like the first part of a disk has uploaded quicker than the later parts.

Why are the files stored in Google Cloud Storage split into 1GB components?

Due to issues we encountered when imaging large files to Google Cloud, we currently split disk images into 1GB components as we upload them.

 

You can recombine them with (see the ordering note after the list):

  • cat disk_name* > combined.bin (On Linux)

  • type disk_name* > combined.bin (On Windows)
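One caveat, based on an assumption about how the components are numbered: if the numbering is not zero-padded, a plain shell glob will sort disk_name10 before disk_name2. On Linux you can check and force a natural ordering with a version sort:

ls -v disk_name*                          # list the components in natural (version) order
cat $(ls -v disk_name*) > combined.bin    # concatenate in that order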

We plan to introduce a better approach for GCP in a future release.

 

How fast will it image, and how can I make it faster?

Hashing speeds will mainly depend on how powerful your CPU is.

Imaging speeds to the cloud will also depend on the bandwidth available to the machine you are acquiring. For example, a 500Mbit/s internet connection will capture a 500 Gigabyte hard drive (minus hashing/overheads) in roughly 2hrs and 15mins. A slow connection of 50Mbit/s would take ten times as long, i.e. around 22hrs.
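The arithmetic behind that estimate, as a quick shell sketch (the figures are from the example above; real transfers add hashing and protocol overhead):

# seconds = disk_bytes / (link_bits_per_second / 8)
echo $(( (500 * 10**9) / ((500 * 10**6) / 8) ))   # 8000 seconds, roughly 2h13m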

 

A few suggestions:

  • Do an internet speed test first, so you know roughly how long it will take.

  • Creating a storage bucket in the same region as your target machine will speed things up and will help maximize the bandwidth and reliability.

  • We recommend connections of at least 100Mbit/s for large drives; 100Mbit/s equates to 12.5 Megabytes a second. However, if you feel like waiting a while, technically any connection will work, as long as it is reliable.

  • Avoid using WiFi unless you feel it is a solid connection; use a physical Ethernet cable to help increase reliability.

  • You always have the option to capture a system to a local encrypted external hard drive, should you find at the last minute that the bandwidth is not up to scratch.

Will there be another release?

When we have time, we will look to release a new version based on initial feedback, taking the opportunity to fix priority bugs and add features from requests. Keep an eye on our Twitter or LinkedIn for announcements.

 

Troubleshooting

  • Is my system supported? – The vast majority of systems should be supported. The likely exceptions you may come across are:

    • Apple products: some work fully, some partially (e.g. 2015 models work well, but on 2013 models the WiFi driver does not work). We will try to create a list as people provide feedback.

    • Some devices have proprietary drivers which are not included in the default Ubuntu operating system. If you were going to face an issue, the chances are it would be with the latest high-end ultrabooks. These may have chipsets for which you would need to manually find and install drivers, e.g. the WiFi or NVMe SSD on some 10th gen Intel ultrabooks. Either way, it’s worth testing these ahead of time if you can, or having a plan B and a USB Ethernet dongle.

  • Networking – Make sure you have reliable access to the internet if you are sending the image to cloud storage. Note: while most network interfaces are supported, there is a chance that some WiFi adapters may not be. If you are imaging a laptop with no built-in Ethernet device, we suggest you take a USB-to-Ethernet adapter or similar with you.

  • Kill Processes – Note that closing the terminal behind the GUI (browser) may not always kill the imaging process. If you suspect you need to kill background processes to start fresh, you have two options:

    • Reboot the live distribution and start again. Note: as it is not persistent, anything you had opened or changed will be lost, so make sure you save anything you need to separate external media.

    • Or, run the following commands to make sure you kill any processes currently running:

      • sudo pkill start_gui

      • sudo pkill start_cli

      • sudo pkill dd

      • sudo pkill s3cmd

      • sudo pkill azcopy

    • This will obviously disrupt any current imaging.

  • Remove the old config – The config.cfg file is stored in the /opt/cado/ directory. If for some reason you need to remove the file and start fresh (other than just rebooting), you can delete it using the rm command.
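    • For example (sudo may be required, depending on file permissions): sudo rm /opt/cado/config.cfg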

 

Documentation Last Updated: 2020-05-29

© 2020 Cado Security

71-75 Shelton Street

London

WC2H 9JQ
