Unpack the data from bourget:
Find your subject...
To find your data, run the findsession command, passing it the "patient name", which is the name that you registered the subject under when scanning:

findsession patientname

This will give you a list of all matching data sets, including the path where each one can be found.
All the runs/series for your subject's visit will be in this directory. You can do more elaborate searches with findsession. To get more info, run findsession with -help.
Note: if the path is in an archive numbered less than 313 (eg, on /cluster/archive/277/...), there are special instructions. See below.
Unpack the data
Unpacking is the process of converting the data from DICOM format (which is not very useful on its own) to a format that an analysis program can use (e.g., bhdr, nifti, analyze, mgh, mgz). The program that does this is dcmunpack (it is only used to unpack Siemens DICOM files). mri_convert can also be used to convert individual runs, whereas dcmunpack can unpack any or all data from a directory. Below we go through some simple unpacking; dcmunpack can do a lot more. Run it with -help to get full documentation.
The first step in unpacking is to decide where you want your data to be stored. Usually, you choose a "parent" directory under which all of the visits for a study will be stored. E.g., /space/data/1/users/you/data-parent. The actual target data for a given visit from a subject would be stored under something like /space/data/1/users/you/data-parent/yoursubject.
Next, you need to find out exactly what data are in the dicom directory. Specifically, you need to know which run/series number corresponds to which acquisition. To do this, run:
dcmunpack -src dicomdir -targ targetdir -scanonly targetdir/scan.info -index-out dcm.index.dat
where dicomdir is the directory where the dicom data reside (as found above by findsession) and targetdir is where you want the individual subject's data to go. The dcm.index.dat file will contain a list of files; it is not necessary, but it can be used as shown below to make the unpacking go faster. The scan.info file will contain a list of run/series numbers along with the scanning protocol that was used to acquire the data, and will look something like this:
1 circle_localizer ok 256 256 3 1 74407231
2 ge_functionals ok 64 64 10 146 74407258
3 ge_functionals ok 64 64 10 146 74406408
4 ge_functionals ok 64 64 10 146 74405551
5 ge_functionals ok 64 64 10 146 74401663
6 ge_functionals ok 64 64 25 3 74400790
7 ep2d_T1w ok 64 64 10 1 74400835
8 tfl3d1_ns T1_MPRAGE_sag ok 256 256 128 1 74401012
9 tfl3d1_ns T1_MPRAGE_sag ok 256 256 128 1 74402212
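If you have many runs, you can pick the run numbers for a given protocol out of scan.info with a one-line awk filter. This is only a sketch: the column layout (run number first, protocol name second) is inferred from the example listing above, so adjust the field numbers if your file differs.

```shell
# Sketch: list the run numbers that used a given protocol.
# Create a small sample scan.info here so the example is self-contained;
# in practice you would point awk at targetdir/scan.info instead.
cat > scan.info <<'EOF'
2 ge_functionals ok 64 64 10 146 74407258
6 ge_functionals ok 64 64 25 3 74400790
8 tfl3d1_ns T1_MPRAGE_sag ok 256 256 128 1 74401012
EOF
awk '$2 == "ge_functionals" {print $1}' scan.info
```

The printed run numbers are the ones you would pass to the -run flags in the unpacking command below.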
To actually do the unpacking, you will need to run dcmunpack again with a slightly different set of arguments:
dcmunpack -src dicomdir -index-in dcm.index.dat -targ targetdir -fsfast \
  -run 2 bold nii f.nii \
  -run 3 bold nii f.nii \
  -run 4 bold nii f.nii \
  -run 5 bold nii f.nii \
  -run 6 bold nii f.nii \
  -run 8 3danat mgz 001.mgz \
  -run 9 3danat mgz 001.mgz
The -fsfast argument tells dcmunpack to use the FS-FAST directory structure (if you want to specify a structure explicitly, use -generic). Each -run flag is followed by 4 arguments: (1) the run number (as found in the scan.info file), (2) a directory, (3) a format, and (4) a file name. In the fsfast directory structure, run 2 will be stored in targetdir/bold/002 in nifti (nii) format with the file name "f.nii" (i.e., the file name given as the 4th argument). Each of the fMRI runs (2-6) will be stored under the "bold" directory (this is also known as the "functional subdirectory" or fsd). Subsequent fsfast commands will expect this directory structure. There will also be a seq.info file in the bold directory with information about the acquisition. If you collected several types of fMRI scans (eg, different slice prescriptions, TR, TE, resolution, etc), then you should put each type in a separate fsd (e.g., -run 7 bold-hires nii f).
Note that run 7 was not unpacked. Note also that the backslashes ("\") are unix command-line continuation characters and not part of the dcmunpack arguments list. Runs 8 and 9 are anatomicals which can be used in FreeSurfer processing; they will be stored in targetdir/3danat/008/001.mgz and 009/001.mgz (this is compressed mgh format). You can launch FreeSurfer on these volumes with:
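Putting the pieces together, the unpacking command above should leave a tree under targetdir roughly like the following. This is a sketch assembled from the run/directory/format/name mappings described in this section, not captured output:

```
targetdir/
    bold/
        002/f.nii
        003/f.nii
        004/f.nii
        005/f.nii
        006/f.nii
        seq.info
    3danat/
        008/001.mgz
        009/001.mgz
```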
recon-all -s yoursubjectname -i targetdir/3danat/008/001.mgz -i targetdir/3danat/009/001.mgz -all
where "yoursubjectname" is the FreeSurfer name for the subject as found in $SUBJECTS_DIR. Alternatively, you can create $SUBJECTS_DIR/yoursubjectname/mri/orig and then copy or link the mgz files into that directory as 001.mgz and 002.mgz before launching recon-all. If you choose to do this, you do not have to pass -i in your command. This will be sufficient:
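The manual staging alternative described above can be sketched as follows. All names here ("yoursubjectname", targetdir) are placeholders carried over from the text, not real paths:

```shell
# Sketch: stage the unpacked anatomicals under $SUBJECTS_DIR/<subject>/mri/orig
# as 001.mgz and 002.mgz so that recon-all can be launched without -i flags.
# "yoursubjectname" and targetdir are placeholders from the text above.
orig="$SUBJECTS_DIR/yoursubjectname/mri/orig"
mkdir -p "$orig"
cp targetdir/3danat/008/001.mgz "$orig/001.mgz"
cp targetdir/3danat/009/001.mgz "$orig/002.mgz"
```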
recon-all -s yoursubjectname -all
For information on the Reconstruction Work Flow, see the Reconstruction Work Flow page.
Instructions for data on archives below 313
If you run findsession and it returns an archive path below 313 (eg, /cluster/archive/301), the data is only on Cloud Bucket backups. You can retrieve that data yourself by following these steps:
1) ssh to either transfer or pinto
2) run /usr/pubsw/bin/findsession to find the PATH to where the data was (if you don't already know), such as /space/archive/227/siemens/Avanto-25096-20090713-111457-265000
3) run sudo /usr/etc/fetchsession PATH, where PATH is the PATH from findsession of the data you want to recover
4) the recovery may take anywhere from 1-40 minutes depending on how much of the volume's catalog is already cached locally and how big the data is
5) at the end of a successful run, the program will output the path to where it put the recovered data, which will always be under /autofs/space/duplarchive/restore
6) recovered data will be removed from this temp volume after two weeks
Send help requests to firstname.lastname@example.org