ConnectomeDB, pyxnat, and the OHBM Hackathon

One of the datasets available for the OHBM Hackathon is the Q1 public data release from the Human Connectome Project. In addition to the imaging data, which are mirrored on S3 for easy access from AWS, a great deal of imaging metadata and associated non-imaging data is accessible through ConnectomeDB, a web application built on the XNAT imaging informatics platform.

pyxnat is a library that provides a Python language API to XNAT's RESTful web services. In this tutorial, we'll use pyxnat to access behavioral measures stored in ConnectomeDB and to view and download the imaging data. Even if you're not a Pythonista, read on, as the underlying XNAT REST API can be accessed from just about any language. I have small examples of code using the REST API in bash, Java, and Clojure, and I'd probably find it amusing to cook up an example in your favorite language; send me mail if you'd like details.

Getting started

You'll need Python (version 2.7.x recommended) and pyxnat to follow along. Someday soon we'll have a hackathon-customized version of pyxnat to provide easier access to the S3-hosted data, but there's nothing AWS-specific about this introduction, so plain old pyxnat will be fine. I'm writing this using Python 2.7.1 on Mac OS X 10.7.5, but I regularly use pyxnat on Gentoo Linux; other people use pyxnat on other Linuxes and even Windows, and in principle this all should work just about anywhere you can run Python. Send me mail if you run into trouble.

Aside for Python experts: because I'm working on pyxnat and not just with it, I usually don't install pyxnat to the system Python; instead I set up a virtualenv and install to that. We'll probably have to do this in a later tutorial, as we start using not-yet-published pyxnat extensions for working with the S3-hosted data.

You'll also need to create an account on ConnectomeDB and agree to the HCP Open Access Data Use Terms.

We'll look at some behavioral measures in ConnectomeDB: the Non-Toolbox Data Measures, a variety of tests that aren't part of the NIH Toolbox. (NIH Toolbox scores are forthcoming but not available in the Q1 data release.) There's not yet any publicly accessible documentation for the Non-Toolbox Data Measures (I'll start agitating for that immediately after I post this), but some details can be gleaned here: nontoolbox.xsd. This is an XML Schema document that specifies the non-Toolbox data type for ConnectomeDB. Despite many promises made about XML, I wouldn't call this human-readable, but it does include a list of all the non-Toolbox fields and a brief description of most of them.
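If you'd rather pull the field list out of the schema than squint at the XML, a few lines of standard-library Python will do it once you have a prompt handy (we're about to start one below). This is just a sketch: I'm assuming the schema URL, which appears in ConnectomeDB's schemaLocation attributes, is served without authentication; if it isn't for you, save the file from your browser and parse it locally instead.

>>> import urllib2, xml.etree.ElementTree as ET
>>> xsd = urllib2.urlopen('https://db.humanconnectome.org/schemas/nontoolbox/nontoolbox.xsd').read()
>>> root = ET.fromstring(xsd)
>>> XS = '{http://www.w3.org/2001/XMLSchema}'
>>> field_names = [e.get('name') for e in root.iter(XS + 'element') if e.get('name')]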

Let's start by firing up a Python session, loading pyxnat, and setting up a connection to ConnectomeDB.

bash-3.2$ python
Python 2.7.1 (r271:86832, Jul 31 2011, 19:30:53)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyxnat
>>> cdb=pyxnat.Interface('https://db.humanconnectome.org','mylogin','mypasswd')
>>>

This Interface object creates a session on ConnectomeDB. If the session is idle for a while, ConnectomeDB may close the session. You can tell that the session has gone stale if, when you try to do a query:

>>> cdb.select.project('HCP_Q1').subject('100307').id()

you get a plateful of nonsense that looks like:

['status', 'content-location', 'content-language', ...
200
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
...

If this happens, just create a new Interface:

>>> cdb=pyxnat.Interface('https://db.humanconnectome.org','mylogin','mypasswd')

Any query result objects that you created from the stale Interface will also need to be refreshed. There's an example later in this tutorial.
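If you're scripting a long session rather than typing interactively, you can automate the refresh. Here's a sketch of a hypothetical helper, query_with_refresh; catching bare Exception is deliberately blunt, since (as the traceback above shows) a stale session surfaces as an ordinary error rather than a dedicated exception type:

>>> def query_with_refresh(run):
...     global cdb
...     try:
...         return run(cdb)
...     except Exception:   # stale session: reconnect and retry once
...         cdb = pyxnat.Interface('https://db.humanconnectome.org','mylogin','mypasswd')
...         return run(cdb)
...
>>> query_with_refresh(lambda c: c.select.project('HCP_Q1').subject('100307').id())
'ConnectomeDB_S00230'
>>>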

Exploring the ConnectomeDB data hierarchy

ConnectomeDB's data is organized into projects, which are the main access control structure in XNAT. If you have access to a project, you can see that project's data. Let's see what projects we have access to:

>>> cdb.select.projects().get()
['HCP_Q1'] # maybe some others, depending on your access settings
>>>

cdb.select.projects() asks ConnectomeDB for project details and turns the result into a collection of project objects. The get() method returns the identifiers for each object in the collection. We could get the same result using a list comprehension; let's try that now, because that will be a more convenient form in general:

>>> [project.id() for project in cdb.select.projects()]
['HCP_Q1']
>>>

Since we're interested in the HCP Q1 data, let's get a handle on just that project:

>>> q1 = cdb.select.project('HCP_Q1')
>>>

Note that if the session goes stale, so will this object q1. So in addition to refreshing cdb, you'll probably need to refresh q1, too, by reissuing this command:

>>> q1 = cdb.select.project('HCP_Q1')
>>>

What's inside of this project object? Each project contains subjects and experiments. Let's look at the list of subjects:

>>> [subject.label() for subject in q1.subjects()]
['100307', '103515', '103818', '111312', '114924', '117122', '118932', '119833', '120212', '125525', '128632', '130013', '137128', '138231', '142828', '143325', '144226', '149337', '150423', '153429', '156637', '159239', '161731', '162329', '167743', '172332', '182739', '191437', '192439', '192540', '194140', '197550', '199150', '199251', '200614', '201111', '210617', '217429', '249947', '250427', '255639', '304020', '307127', '329440', '499566', '530635', '559053', '585862', '638049', '665254', '672756', '685058', '729557', '732243', '792564', '826353', '856766', '859671', '861456', '865363', '877168', '889579', '894673', '896778', '896879', '901139', '917255', '937160', '131621', '355542', '611231', '144428', '230926', '235128', '707244', '733548']
>>>

We used subject.label() instead of subject.id(), which inside the list comprehension would have given the same result as q1.subjects().get(). Why label() instead of id()? The label is the human-readable name for the subject within a specified project (HCP_Q1); the first label in the list is 100307, which is the HCP-assigned name for that subject. The subject id is the XNAT site-wide unique identifier for that subject, a not-intended-for-human-consumption identifier; the id for subject 100307 is 'ConnectomeDB_S00230'. In principle, different projects might assign different labels to the same subject, or different subjects might share the same label in different projects. We aren't engaging in those sorts of shenanigans on ConnectomeDB, but we do inherit a little complexity from XNAT's flexibility.
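To see the difference concretely, ask for both names of the same subject:

>>> s = q1.subject('100307')
>>> s.label()   # human-readable, project-scoped name
'100307'
>>> s.id()      # site-wide XNAT identifier
'ConnectomeDB_S00230'
>>>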

What data are available for subject 100307? Let's ask:

>>> [expt.label() for expt in q1.subject('100307').experiments()]
['100307_3T', '100307_SubjMeta', '100307_NonToolbox']
>>>

There are three "experiments" here: 100307_3T contains the imaging data and associated metadata acquired on the HCP 3T Skyra; 100307_SubjMeta holds some bookkeeping about what data have been collected for this subject; and 100307_NonToolbox has the non-Toolbox scores. Again we use label() instead of id() (or get() on the experiments collection), because each project has a human-readable label for the experiment, whereas the id is the site-wide, XNAT-generated identifier.
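Incidentally, the ids behind these labels are XNAT-generated. For the non-Toolbox experiment we're about to open, the id looks like this (you'll see the same value in the XML below):

>>> q1.subject('100307').experiment('100307_NonToolbox').id()
'ConnectomeDB_E00299'
>>>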

The experiments are represented by XML documents; we can view the XML for 100307_NonToolbox to see what's inside:

>>> nt_100307 = q1.subject('100307').experiment('100307_NonToolbox')
>>> print(nt_100307.get())
<?xml version="1.0" encoding="UTF-8"?>
<nt:NTScores ID="ConnectomeDB_E00299" project="HCP_Subjects" label="100307_NonToolbox" xmlns:arc="http://nrg.wustl.edu/arc" xmlns:val="http://nrg.wustl.edu/val" xmlns:pipe="http://nrg.wustl.edu/pipe" xmlns:hcp="http://nrg.wustl.edu/hcp" xmlns:wrk="http://nrg.wustl.edu/workflow" xmlns:scr="http://nrg.wustl.edu/scr" xmlns:xdat="http://nrg.wustl.edu/security" xmlns:nt="http://nrg.wustl.edu/nt" xmlns:cat="http://nrg.wustl.edu/catalog" xmlns:prov="http://www.nbirn.net/prov" xmlns:xnat="http://nrg.wustl.edu/xnat" xmlns:xnat_a="http://nrg.wustl.edu/xnat_assessments" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://nrg.wustl.edu/workflow https://db.humanconnectome.org/schemas/pipeline/workflow.xsd http://nrg.wustl.edu/catalog https://db.humanconnectome.org/schemas/catalog/catalog.xsd http://nrg.wustl.edu/pipe https://db.humanconnectome.org/schemas/pipeline/repository.xsd http://nrg.wustl.edu/hcp https://db.humanconnectome.org/schemas/HCPMetadata/metadata.xsd http://nrg.wustl.edu/nt https://db.humanconnectome.org/schemas/nontoolbox/nontoolbox.xsd http://nrg.wustl.edu/scr https://db.humanconnectome.org/schemas/screening/screeningAssessment.xsd http://nrg.wustl.edu/arc https://db.humanconnectome.org/schemas/project/project.xsd http://nrg.wustl.edu/val https://db.humanconnectome.org/schemas/validation/protocolValidation.xsd http://nrg.wustl.edu/xnat https://db.humanconnectome.org/schemas/xnat/xnat.xsd http://nrg.wustl.edu/xnat_assessments https://db.humanconnectome.org/schemas/assessments/assessments.xsd http://www.nbirn.net/prov https://db.humanconnectome.org/schemas/birn/birnprov.xsd http://nrg.wustl.edu/security https://db.humanconnectome.org/schemas/security/security.xsd">
<xnat:sharing>
<xnat:share label="100307_NonToolbox" project="HCP_Q1">
<!--hidden_fields[xnat_experimentData_share_id="1",sharing_share_xnat_experimentDa_id="ConnectomeDB_E00299"]-->
</xnat:share>
<xnat:share label="100307_NonToolbox" project="HCP_Q2">
<!--hidden_fields[xnat_experimentData_share_id="77",sharing_share_xnat_experimentDa_id="ConnectomeDB_E00299"]-->
</xnat:share>
</xnat:sharing>
<xnat:subject_ID>ConnectomeDB_S00230</xnat:subject_ID>
<nt:HCPPNP>
<nt:mars_log_score>1.76</nt:mars_log_score>
<nt:mars_errs>0</nt:mars_errs>
<nt:mars_final>1.76</nt:mars_final>
</nt:HCPPNP>
<nt:DDISC>
<nt:SV_1mo_200>103.13</nt:SV_1mo_200>
<nt:SV_6mo_200>46.88</nt:SV_6mo_200>
<nt:SV_1yr_200>103.13</nt:SV_1yr_200>
<nt:SV_3yr_200>21.88</nt:SV_3yr_200>
<nt:SV_5yr_200>21.88</nt:SV_5yr_200>
<nt:SV_10yr_200>9.38</nt:SV_10yr_200>
<nt:SV_1mo_40000>19375.0</nt:SV_1mo_40000>
<nt:SV_6mo_40000>29375.0</nt:SV_6mo_40000>
<nt:SV_1yr_40000>24375.0</nt:SV_1yr_40000>
<nt:SV_3yr_40000>9375.0</nt:SV_3yr_40000>
<nt:SV_5yr_40000>9375.0</nt:SV_5yr_40000>
<nt:SV_10yr_40000>9375.0</nt:SV_10yr_40000>
<nt:AUC_200>0.16217604</nt:AUC_200>
<nt:AUC_40000>0.31145853</nt:AUC_40000>
</nt:DDISC>
<nt:NEO>
<nt:NEO>144</nt:NEO>
<nt:NEOFAC_A>33</nt:NEOFAC_A>
<nt:NEOFAC_O>24</nt:NEOFAC_O>
<nt:NEOFAC_C>35</nt:NEOFAC_C>
<nt:NEOFAC_N>15</nt:NEOFAC_N>
<nt:NEOFAC_E>37</nt:NEOFAC_E>
</nt:NEO>
<nt:SPCPTNL>
<nt:SCPT_TP>59</nt:SCPT_TP>
<nt:SCPT_TN>115</nt:SCPT_TN>
<nt:SCPT_FP>5</nt:SCPT_FP>
<nt:SCPT_FN>1</nt:SCPT_FN>
<nt:SCPT_TPRT>412.0</nt:SCPT_TPRT>
<nt:SCPT_SEN>0.9833</nt:SCPT_SEN>
<nt:SCPT_SPEC>0.9583</nt:SCPT_SPEC>
<nt:SCPT_LRNR>11</nt:SCPT_LRNR>
</nt:SPCPTNL>
<nt:CPW>
<nt:IWRD_TOT>35</nt:IWRD_TOT>
<nt:IWRD_RTC>1442.0</nt:IWRD_RTC>
</nt:CPW>
<nt:PMAT24A>
<nt:PMAT24_A_CR>17</nt:PMAT24_A_CR>
<nt:PMAT24_A_SI>2</nt:PMAT24_A_SI>
<nt:PMAT24_A_RTCR>11839.0</nt:PMAT24_A_RTCR>
</nt:PMAT24A>
<nt:VSPLOT24>
<nt:VSPLOT_TC>9</nt:VSPLOT_TC>
<nt:VSPLOT_CRTE>834.3</nt:VSPLOT_CRTE>
<nt:VSPLOT_OFF>29</nt:VSPLOT_OFF>
</nt:VSPLOT24>
<nt:ER40>
<nt:ER40_CR>39</nt:ER40_CR>
<nt:ER40_CRT>1471.0</nt:ER40_CRT>
<nt:ER40ANG>8</nt:ER40ANG>
<nt:ER40FEAR>8</nt:ER40FEAR>
<nt:ER40HAP>8</nt:ER40HAP>
<nt:ER40NOE>8</nt:ER40NOE>
<nt:ER40SAD>7</nt:ER40SAD>
</nt:ER40>
<nt:ASR>
<nt:ASRSyndromeScores>
<nt:ASR_anxdp_raw>3</nt:ASR_anxdp_raw>
<nt:ASR_anxdp_t>50</nt:ASR_anxdp_t>
<nt:ASR_wthdp_raw>0</nt:ASR_wthdp_raw>
<nt:ASR_wthdp_t>50</nt:ASR_wthdp_t>
<nt:ASR_som_raw>0</nt:ASR_som_raw>
<nt:ASR_som_t>50</nt:ASR_som_t>
<nt:ASR_tho_raw>1</nt:ASR_tho_raw>
<nt:ASR_tho_t>50</nt:ASR_tho_t>
<nt:ASR_att_raw>1</nt:ASR_att_raw>
<nt:ASR_att_t>50</nt:ASR_att_t>
<nt:ASR_agg_raw>3</nt:ASR_agg_raw>
<nt:ASR_agg_t>51</nt:ASR_agg_t>
<nt:ASR_rule_raw>1</nt:ASR_rule_raw>
<nt:ASR_rule_t>51</nt:ASR_rule_t>
<nt:ASR_int_raw>1</nt:ASR_int_raw>
<nt:ASR_int_t>50</nt:ASR_int_t>
<nt:ASR_other_raw>8</nt:ASR_other_raw>
<nt:ASR_critical_raw>2</nt:ASR_critical_raw>
<nt:ASR_cmp_internalizing_raw>3</nt:ASR_cmp_internalizing_raw>
<nt:ASR_cmp_internalizing_t>39</nt:ASR_cmp_internalizing_t>
<nt:ASR_cmp_externalizing_raw>5</nt:ASR_cmp_externalizing_raw>
<nt:ASR_cmp_externalizing_t>46</nt:ASR_cmp_externalizing_t>
<nt:ASR_cmp_other_raw>10</nt:ASR_cmp_other_raw>
<nt:ASR_cmp_total_raw>18</nt:ASR_cmp_total_raw>
<nt:ASR_cmp_total_t>40</nt:ASR_cmp_total_t>
</nt:ASRSyndromeScores>
<nt:ASRDsmScores>
<nt:DSM_dep_raw>1</nt:DSM_dep_raw>
<nt:DSM_dep_t>50</nt:DSM_dep_t>
<nt:DSM_anx_raw>3</nt:DSM_anx_raw>
<nt:DSM_anx_t>50</nt:DSM_anx_t>
<nt:DSM_som_raw>0</nt:DSM_som_raw>
<nt:DSM_som_t>50</nt:DSM_som_t>
<nt:DSM_avoid_raw>1</nt:DSM_avoid_raw>
<nt:DSM_avoid_t>50</nt:DSM_avoid_t>
<nt:DSM_adh_raw>4</nt:DSM_adh_raw>
<nt:DSM_adh_t>51</nt:DSM_adh_t>
<nt:DSM_inatt_raw>1</nt:DSM_inatt_raw>
<nt:DSM_hyp_raw>3</nt:DSM_hyp_raw>
<nt:DSM_asoc_raw>2</nt:DSM_asoc_raw>
<nt:DSM_asoc_t>51</nt:DSM_asoc_t>
</nt:ASRDsmScores>
</nt:ASR>
</nt:NTScores>
>>>

That's a lot of stuff. Let's take it line-by-line.

The first line, <?xml version="1.0" ... , just tells us that this is an XML document.

The second line, <nt:NTScores ID="ConnectomeDB_E00299" ..., is the start of the actual content. It tells us that this is a N(on)T(oolbox)Scores document, gives us the experiment ID (the XNAT site-wide identifier), the project ID, the experiment label (the human-readable, in-project-context name), and ends with a bunch of namespace information in case we want to validate this document against the schema we were looking at earlier. (I don't. You're welcome to if you like.)

The next few lines, <xnat:sharing> through </xnat:sharing>, tell us what projects know about this experiment. We can skip over this. (Yes, there's an HCP_Q2 project. No, it's not ready for you to look at yet.)

Next comes the subject ID; again, this is the XNAT site-wide ID, not the human-readable name (label). We can use pyxnat to ask ConnectomeDB for the label in a specified project:

>>> q1.subject('ConnectomeDB_S00230').label()
'100307'
>>>

After that come the scores (and lots of them), organized into a few groups. The schema document nontoolbox.xsd may be useful in helping to decipher this. We can ask for individual scores by walking the XML DOM:

>>> nt = q1.subject('100307').experiment('100307_NonToolbox')
>>> nt.xpath('nt:ER40/nt:ER40_CR')
[<Element {http://nrg.wustl.edu/nt}ER40_CR at 0x102065370>]
>>> nt.xpath('nt:ER40/nt:ER40_CR')[0].text
'39'
>>>

That's a slow way of retrieving scores, since we need a full HTTP request and response for each field. If we want multiple scores (either more than one score from a single experiment, or one or more scores from each of multiple experiments), there are more efficient methods.

Let's start with selecting multiple scores for a single experiment. A reasonable approach is to grab and parse the entire experiment XML document, using the Python standard library module ElementTree:

>>> import xml.etree.ElementTree as ET
>>> nt_dom = ET.fromstring(nt.get())
>>> nt_dom.tag
'{http://nrg.wustl.edu/nt}NTScores'
>>> er40 = nt_dom.find('{http://nrg.wustl.edu/nt}ER40')
>>> [[e.tag,e.text] for e in er40]
[['{http://nrg.wustl.edu/nt}ER40_CR', '39'], ['{http://nrg.wustl.edu/nt}ER40_CRT', '1471.0'], ['{http://nrg.wustl.edu/nt}ER40ANG', '8'], ['{http://nrg.wustl.edu/nt}ER40FEAR', '8'], ['{http://nrg.wustl.edu/nt}ER40HAP', '8'], ['{http://nrg.wustl.edu/nt}ER40NOE', '8'], ['{http://nrg.wustl.edu/nt}ER40SAD', '7']]
>>>
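Once the document is parsed, one pass over the tree flattens every score into a dictionary keyed by field name. Here's a small sketch (localname is just a hypothetical helper to strip the namespace; the comprehension also sweeps in the subject_ID leaf, which is harmless for lookups):

>>> def localname(tag): return tag.split('}', 1)[-1]
...
>>> scores = {localname(e.tag): e.text
...           for e in nt_dom.iter()
...           if len(e) == 0 and e.text and e.text.strip()}
>>> scores['ER40_CR']
'39'
>>> scores['NEOFAC_A']
'33'
>>>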

...

Now that OHBM and the hackathon are past, most of this tutorial has been moved to a generic tutorial on pyxnat and ConnectomeDB. A few topics, mostly related to AWS and the S3 mirror of the Q1 data, remain here.

Accessing imaging data

Before trying to access the data, it's important to understand what's in the Q1 release. The Q1 data release documentation describes the session structure and file layout in detail. The Q1 imaging data are mirrored on Amazon's S3, which is particularly useful for copying data into an EC2 instance. The Python library boto provides an interface for many of Amazon's web services, including S3. If you installed the HCP-customized pyxnat, you already have boto.

I'm working to extend pyxnat to translate between the internal storage paths on ConnectomeDB and the (differently organized) copy of the data on S3. You really don't need to wait for this, though: the example searches above produce subject labels, which is enough information to point you to the right data directories on S3; for example, subject 100307's data can be found at s3://hcp.aws.amazon.com/q1/100307/. Consult the hackathon HCP data release announcement for more details.
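In the meantime, the mapping is easy to write yourself: a subject label from the queries above becomes a key prefix in the bucket. (s3_prefix is just a hypothetical convenience function matching the layout described in the announcement.)

>>> def s3_prefix(subject_label):
...     return 'q1/%s/' % subject_label   # key prefix within the hcp.aws.amazon.com bucket
...
>>> s3_prefix('100307')
'q1/100307/'
>>>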

Browsing the imaging data

In order to start exploring and using the S3 Q1 mirror, you'll need to set up your AWS account and get access to the Amazon-hosted data. This process will get you an access key and a secret key, which you'll use to authenticate against S3. From a Python command line, you can browse the Q1 data by getting a handle to the "bucket" where the data are stored:

>>> from boto.s3.connection import S3Connection
>>> s3 = S3Connection('your-access-key','your-secret-key')
>>> bucket = s3.get_bucket('hcp.aws.amazon.com')

S3 is a key-value-oriented store, rather than a hierarchical filesystem, but the HCP Q1 data is stored with keys that echo a regular file system. boto's interface to S3 makes it easy to pretend you're walking a file tree. There is a single root element q1, and each subject is a child of that root:

>>> [k.name for k in bucket.list('q1/','/')]
[u'q1/', u'q1/100307/', u'q1/103515/', u'q1/103818/', u'q1/111312/', u'q1/114924/', u'q1/117122/', u'q1/118932/', u'q1/119833/', u'q1/120212/', u'q1/125525/', u'q1/128632/', u'q1/130013/', u'q1/131621/', u'q1/137128/', u'q1/138231/', u'q1/142828/', u'q1/143325/', u'q1/144226/', u'q1/149337/', u'q1/150423/', u'q1/153429/', u'q1/156637/', u'q1/159239/', u'q1/161731/', u'q1/162329/', u'q1/167743/', u'q1/172332/', u'q1/182739/', u'q1/191437/', u'q1/192439/', u'q1/192540/', u'q1/194140/', u'q1/197550/', u'q1/199150/', u'q1/199251/', u'q1/200614/', u'q1/201111/', u'q1/210617/', u'q1/217429/', u'q1/249947/', u'q1/250427/', u'q1/255639/', u'q1/304020/', u'q1/307127/', u'q1/329440/', u'q1/355542/', u'q1/499566/', u'q1/530635/', u'q1/559053/', u'q1/585862/', u'q1/611231/', u'q1/638049/', u'q1/665254/', u'q1/672756/', u'q1/685058/', u'q1/729557/', u'q1/732243/', u'q1/792564/', u'q1/826353/', u'q1/856766/', u'q1/859671/', u'q1/861456/', u'q1/865363/', u'q1/877168/', u'q1/889579/', u'q1/894673/', u'q1/896778/', u'q1/896879/', u'q1/901139/', u'q1/917255/', u'q1/937160/']

Each subject is organized as described in the Q1 data release documentation.

>>> [k.name for k in bucket.list('q1/100307/','/')]
[u'q1/100307/.xdlm/', u'q1/100307/Diffusion/', u'q1/100307/MNINonLinear/', u'q1/100307/T1w/', u'q1/100307/release-notes/', u'q1/100307/unprocessed/']

The three key fragments associated with the "minimally preprocessed" data are Diffusion, MNINonLinear, and T1w. The key fragment .xdlm marks file manifests for download integrity checking, including checksums, while unprocessed marks unprocessed data.

Downloading files from S3

Now let's copy all files for the motor task with left-to-right phase encoding from S3 to a local disk.

>>> import errno,os,os.path
>>> for k in bucket.list('q1/100307/MNINonLinear/Results/tfMRI_MOTOR_LR/'):
...   if k.name.endswith('/'): continue    # skip directory placeholder keys
...   dir = os.path.dirname(k.name)
...   try: os.makedirs(dir)
...   except OSError as exc:
...     if exc.errno == errno.EEXIST and os.path.isdir(dir): pass
...     else: raise
...   with open(k.name, 'wb') as f:        # binary mode: these are imaging files
...     k.get_contents_to_file(f)
...
>>> 

Note that we use bucket.list(...) a little differently here: with one argument, it returns all keys starting with the provided text, which is comparable to a full recursive listing in a hierarchical file system.
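A handy corollary: each key in a listing carries its size, so you can estimate a download before fetching anything. Neither of these lines transfers any file data; nfiles and nbytes are just local names:

>>> keys = list(bucket.list('q1/100307/MNINonLinear/Results/tfMRI_MOTOR_LR/'))
>>> nfiles, nbytes = len(keys), sum(k.size for k in keys)   # sizes come back with the listing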

Accessing S3 data through the Subject object

The instructions above handle ConnectomeDB and S3 as different worlds; there is some limited support for bridging this gap. Pyxnat can maintain a local mirror of the ConnectomeDB data, with contents downloaded on request. The first piece you'll need is an object representing both sides of the mirror (the local file space and the S3 bucket):

>>> from pyxnat.core.mirror import S3Mirror
>>> mirror = S3Mirror.open('hcp.aws.amazon.com','your-access-key','your-secret-key','/path/to/local/mirror')

Next, you'll need to hand this to the pyxnat Interface: 

>>> cdb = pyxnat.Interface('https://db.humanconnectome.org','username','password',data_mirror=mirror)
>>>

You can now access local copies of the data files through the subject object: 

>>> cdb.project('HCP_Q1').subject('100307').files('MNINonLinear/Results/tfMRI_MOTOR_LR')
[u'/path/to/local/mirror/q1/100307/MNINonLinear/Results/tfMRI_MOTOR_LR/...]

The return value from this call is a list of local file paths for the requested data. If the files are already on your disk, this will return quickly; if not, the files are downloaded before the files() call returns. You can request all of a subject's data by skipping the root path argument: 

>>> cdb.project('HCP_Q1').subject('100307').files()

Note that this will most likely take a really long time, because a single subject is close to 20 GB, unless you have a very fat pipe to S3 (say, from an EC2 instance). (This is why I'm not even showing the return value. It's a long list of files. You probably should think about whether you really need all those files, and find a better way to get them if you do.)
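If you need more than one result tree but not everything, request the narrowest roots one at a time. For example, to mirror just the minimally preprocessed diffusion data (Diffusion is one of the top-level key fragments we saw earlier; the return value is the same kind of local path list as above):

>>> dmri_files = cdb.project('HCP_Q1').subject('100307').files('Diffusion')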
