
This page contains answers to frequently asked questions from users of HCP data and HCP tools. If you don't find your question answered here, look through the full HCP-Users list archive, or join the HCP-Users email list and post your question there.

1. What are *.dtseries.nii, *.dconn.nii, *.dlabel.nii, and *.dscalar.nii files?

These are CIFTI-2 format files (see the CIFTI-2 specification for details) that are output from the HCP Pipelines (pipeline code released on GitHub). The extension for CIFTI files is .[type].nii, where type is dconn, dtseries, dscalar, dlabel, pconn, or another such specifier. It ends in .nii because CIFTI uses the NIFTI-2 container format, which specifies that the extension must be .nii. The files include a specialized XML header extension that describes what the indices represent, leaving most of the NIFTI-2 header fields (spacing info, etc.) blank/unused. Note that CIFTI files are not (and must not be) gzipped, because on-disk random access is used to cope with their large file sizes. However, we usually gzip our volume files, so the volume files we distribute will generally end in .nii.gz.
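Because CIFTI-2 rides on the NIFTI-2 container, you can sanity-check a file's header from any language: the NIFTI-2 header begins with the little- or big-endian int32 header size 540, followed by an 8-byte magic string at offset 4. A minimal Python sketch (the function name and usage are illustrative, not part of any HCP tool):

```python
import struct

NIFTI2_MAGIC = b"n+2\x00\r\n\x1a\n"  # NIFTI-2 magic string at byte offset 4


def looks_like_nifti2(path):
    """Return True if the file starts with a NIFTI-2 header (as CIFTI-2 files do)."""
    with open(path, "rb") as f:
        head = f.read(12)
    if len(head) < 12:
        return False
    (size_le,) = struct.unpack("<i", head[:4])
    (size_be,) = struct.unpack(">i", head[:4])
    # NIFTI-2 fixes sizeof_hdr at 540 (in either byte order) plus the magic string.
    return 540 in (size_le, size_be) and head[4:12] == NIFTI2_MAGIC
```

A file that fails this check but begins with the gzip magic bytes 0x1f 0x8b has likely been gzipped, which, as noted above, CIFTI files must not be.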

The standard CIFTI extensions are listed in Appendix A of the specification, but refer to the main specification for how "unknown" types work (the extension still must end in .nii).


CIFTI has the advantage of being able to handle data on surface vertices and subcortical voxels in one file. Collectively, these are termed "grayordinates". Some CIFTI files distributed by HCP (e.g., the rfMRI *dtseries.nii files) do contain data from both the surface and subcortical voxels.  The voxels in these files are in MNI space via FNIRT.  CIFTI files do not contain the surface coordinates – you must get those from the relevant surface file (.surf.gii in the HCP data). The surfaces used have roughly equivalent spacing to the voxels used, so that the vertices and voxels have a similar "scale". However, two thirds of the data is not in voxels, but in surface vertices.
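As a concrete check on the proportions mentioned above, the standard 2 mm grayordinate space used in these files contains 91282 grayordinates; the counts below are well-known properties of that standard space, stated here for illustration:

```python
# Grayordinate counts for the standard HCP "91282" 2 mm CIFTI space.
left_vertices = 29696       # left cortical surface vertices
right_vertices = 29716      # right cortical surface vertices
subcortical_voxels = 31870  # subcortical (and cerebellar) voxels

total = left_vertices + right_vertices + subcortical_voxels
surface_fraction = (left_vertices + right_vertices) / total
print(total)                       # 91282
print(round(surface_fraction, 2))  # 0.65 -- roughly the "two thirds" on the surface
```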

*.dtseries.nii files contain timeseries fMRI data for all CIFTI grayordinates.

*.dconn.nii files contain dense connectivity matrix data (correlations between all CIFTI grayordinates).

*.dlabel.nii files contain label designations and a label color key for every grayordinate.

*.dscalar.nii files contain scalar data for every grayordinate.

Parcellated CIFTI files (e.g., *.pconn.nii, *.ptseries.nii) also exist; these contain data for parcels (groups of grayordinates) rather than for individual grayordinates.
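Conceptually, a parcellated timeseries is just the dense timeseries averaged within each parcel (which is what wb_command -cifti-parcellate does with its default MEAN method on real CIFTI files). A toy numpy sketch, with made-up data and made-up parcel labels:

```python
import numpy as np

# Toy dense timeseries: 6 grayordinates (rows) x 4 timepoints (columns).
dense = np.arange(24, dtype=float).reshape(6, 4)

# Hypothetical parcel assignment: grayordinates 0-2 -> parcel 0, 3-5 -> parcel 1.
labels = np.array([0, 0, 0, 1, 1, 1])

# Average the dense rows within each parcel (the MEAN reduction).
parcellated = np.vstack([dense[labels == p].mean(axis=0)
                         for p in np.unique(labels)])
print(parcellated.shape)  # (2, 4): one timeseries per parcel
```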

Connectome Workbench is the tool we use with CIFTI files. You can visualize the data with the wb_view GUI, or perform various command-line operations (including extracting the data into other formats) with wb_command.

2. How do you get CIFTI files into MATLAB?

HCP has developed two ways to get CIFTI files into MATLAB:

FOR MEG USERS (This tool pads the matrix with NaNs):

A. PLEASE DON'T USE THIS OPTION WITH HCP MRI DATA AT THIS TIME. Use the code developed for the HCP megconnectome data analysis pipelines, which are implemented using the FieldTrip toolbox; the code is also available in stand-alone form. This approach loads the complete CIFTI XML header information into a MATLAB structure, including information on what each CIFTI index represents. Where possible it will also represent the anatomical models. Furthermore, it allows you to write data from MATLAB to CIFTI format. This code is relatively new and not yet fully tested; if you try it, we would appreciate your feedback.

FOR MRI USERS (This tool does not introduce NaNs):

B. Use Workbench v1.0 plus the GIFTI toolbox (described below). This method has the limitation of not telling you what the CIFTI indices represent (which vertex or voxel, etc.), and it creates intermediate files (which the provided code deletes when it finishes). For this approach you need two prerequisites:

1) Workbench v1.0 needs to be installed on the system.

2) The GIFTI toolbox for MATLAB, by Guillaume Flandin.


Here are MATLAB functions to download and use for importing CIFTI files (e.g., *.dtseries.nii) using these prerequisite tools:




cii = ciftiopen('path/to/file','path/to/wb_command');
CIFTIdata = cii.cdata;

Then, after your analysis code produces an output matrix (AnalysisOutput below), save it:

newcii = cii;
newcii.cdata = AnalysisOutput;
ciftisave(newcii,'path/to/newfile','path/to/wb_command');

Or, if the data matrix has a different number of maps/columns from what you started with, use ciftisavereset instead, so the header is rebuilt to match:

ciftisavereset(newcii,'path/to/newfile','path/to/wb_command');
3. I see that HCP distributes group average dense connectome files. Do you also provide connectivity matrices for individual subjects?

We don't include individual dense connectomes in the HCP releases, because these files are very large (~33GB each). You can generate them by running the following on a subject's rfMRI dtseries file:

wb_command -cifti-correlation <input>.dtseries.nii <output>.dconn.nii

Because the HCP rfMRI data was collected in 4 runs, you may want to 1) demean and normalize the individual timeseries, then 2) concatenate them, before you do the correlation step.

To do that, use these commands in wb_command:

wb_command -cifti-reduce <input> MEAN mean.dtseries.nii
wb_command -cifti-reduce <input> STDEV stdev.dtseries.nii
wb_command -cifti-math '(x - mean) / stdev' <output> -fixnan 0 -var x <input> -var mean mean.dtseries.nii -select 1 1 -repeat -var stdev stdev.dtseries.nii -select 1 1 -repeat
wb_command -cifti-merge out.dtseries.nii -cifti first.dtseries.nii -cifti second.dtseries.nii -cifti third.dtseries.nii -cifti fourth.dtseries.nii
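Numerically, the four commands above amount to the following steps, shown here as a numpy sketch on made-up data (the real commands operate on CIFTI files on disk):

```python
import numpy as np

rng = np.random.default_rng(0)
# Four hypothetical runs: 10 grayordinates x 100 timepoints each.
runs = [rng.standard_normal((10, 100)) for _ in range(4)]


def demean_normalize(ts):
    """Per grayordinate: subtract the mean, divide by the stdev (the -cifti-math step)."""
    mean = ts.mean(axis=1, keepdims=True)
    std = ts.std(axis=1, keepdims=True)
    z = (ts - mean) / std
    return np.nan_to_num(z)  # mirrors -fixnan 0 for zero-variance rows

concat = np.hstack([demean_normalize(r) for r in runs])  # the -cifti-merge step
dconn = np.corrcoef(concat)                              # the -cifti-correlation step
print(dconn.shape)  # (10, 10): grayordinates x grayordinates
```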


You could do this for each subject (and then z-transform and average the connectomes across subjects). 

You could also use -cifti-merge to concatenate all runs for all subjects after the demeaning step and follow with -cifti-correlation to make a group averaged dense connectome file. 
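The z-transform-and-average step works like this: apply the Fisher transform (atanh) to each subject's correlations, average across subjects, and optionally transform back to r. A numpy sketch with made-up subject matrices (on real files the same transform can be applied with a -cifti-math expression):

```python
import numpy as np

rng = np.random.default_rng(1)


def random_corr(n):
    """Make a hypothetical subject-level correlation matrix from random timeseries."""
    return np.corrcoef(rng.standard_normal((n, 50)))

subjects = [random_corr(5) for _ in range(3)]

# Fisher z-transform each subject, average, then transform back to r.
# Clip values near 1 (the diagonal) first, since atanh(1) is infinite.
zs = [np.arctanh(np.clip(m, -0.999999, 0.999999)) for m in subjects]
group_r = np.tanh(np.mean(zs, axis=0))
print(group_r.shape)  # (5, 5)
```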

We are working on creating a better way to do these things in the future.

4. Where do I find more information about running the HCP Pipelines?

Check out the HCP Pipelines FAQ and the documentation on the GitHub HCP Pipelines distribution site.

5. Where do I download the experimental stimuli and presentation protocol software scripts for the HCP task runs?

You can download all of the HCP E-Prime scripts at:

Use your ConnectomeDB account for login. The HCP E-Prime scripts are available under licensing terms in the README document included in the download. You will need to have E-Prime 2.0 Professional and a dual monitor setup to use the scripts as they are. You may need to edit the scripts to suit your own purposes.

6. Where do I find information and definitions of abbreviations for the Behavioral and Demographic data on subjects? 

Descriptions of the column headers for both Open access and Restricted HCP behavioral and demographic/individual difference measures are located in the HCP Data Dictionary wiki and in the Data Dictionary within ConnectomeDB (left click on the column name of any data column and select "Data Dictionary"). 

7. Using gradient unwarping fails, unable to find the file "fullWarp_abs", what do I do?

If you get an error like this:

Image Exception : #22 :: ERROR: Could not open image .../T1w/T1w1_GradientDistortionUnwarp/fullWarp_abs
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
.../scripts/ line 92: 11237 Aborted                 (core dumped) ${FSLDIR}/bin/convertwarp --abs --ref=$WD/trilinear.nii.gz --warp1=$WD/fullWarp_abs.nii.gz --relout --out=$OutputTransform

Then what you need is the HCP-modified version of the gradient unwarp code.

8. There are HCP subjects that are listed as twins with the same parents (Co-Twins), but they have different exact ages or age ranges. Are these subjects not twins? 

In the 500 Subjects Release there are 4 sets of twins who do not have the same exact ages in ConnectomeDB. Exact age is restricted data, but it is also used to bin subjects into age ranges, which are open access. The differences in the exact ages of these 4 twin pairs arise because some twin pairs are interviewed and scanned months apart due to their individual schedules; in these 4 pairs, one twin happened to be interviewed in the months before their birthday and the other in the months after. Because twin status and co-twin IDs are restricted data, we cannot be more specific here than to say that 4 twin pairs are affected.

For all HCP participants, we collect the "exact age" information at the time of the SSAGA phone interview that precedes the scanning visit. Note that months can sometimes elapse between the SSAGA interview and the scan visit; "exact age" is always the participant's age at the time of the SSAGA interview, not the scan date. Users should be aware of this as they conduct their analyses.

9. How do I map data between FreeSurfer and HCP?

UPDATE May 17, 2017: A new, more accurate atlas-to-atlas registration was computed between fs_LR and fsaverage.  If you have resampled data between fs_LR and fsaverage prior to this date, please read Appendix 3 of the new instruction document for details.

Comparisons between HCP-derived data (including the new HCP_MMP1.0 cortical parcellation – Glasser et al., Nature, 2016) and data analyzed in FreeSurfer entail mapping between different surface 'spaces': HCP data are generally on a standard fs_LR mesh (left and right hemispheres aligned), whereas FreeSurfer data are on a native mesh or on the fsaverage mesh (in both cases, with no correspondence between hemispheres). Mapping data from one surface mesh to another uses the one-step "resample" commands in wb_command, plus, in some cases, preparatory steps.

We have written a document (Resampling-FreeSurfer-HCP_5_8.pdf) with detailed instructions for performing mappings from:

A) fsaverage group data to fs_LR

B) FreeSurfer native individual data to fs_LR

C) fs_LR group data to fsaverage

D) fs_LR individual data to fsaverage.

We recommend options (A) or (B) so as to benefit from the correspondences between left and right hemispheres provided by the fs_LR atlas. Option (B) presumes that FreeSurfer was run using mris_register, yielding a “?h.sphere.reg” native mesh sphere registered to fsaverage.

