Boreal Data Foundry Documentation

INTRODUCTION

This page explains how to prepare (package) a dataset, document it, and upload it to the Boreal Data Foundry. The procedure involves three steps that should be performed in the given order. You must have access to this web site to perform any of these actions. You can request access by contacting Pierre Racine.

TO DO
List of datasets to document, in order of priority, by the person responsible.
Remove each item once it has been documented.

Hermann

Kim

  • All stream networks we used
  • All the ranker layers (CMI, NALC, GPP, etc...)
  • GFWC Intact layers as used by BEACONS

Trish

  • Replacing all the TBD values in all the climate data previously documented by Pierre with the proper values.
  • Leaf Area Index
  • NDVI
  • VCF
  • bioclimatic layers
  • LCC reclassification BAM
  • GPP original and derived
  • NPP original and derived

Pierre

  • Quebec Third Forest Inventory Road Network
  • Quebec Fourth Forest Inventory
  • Ontario Forest Inventory
  • Newfoundland Forest Inventory
  • AlPac source ASCII files

The main objective of the BAM & BEACONs Data Inventory is to ease the use of frequently used datasets by BEACONs and BAM users. This is achieved in several ways:

  • By archiving our own copies of public datasets - Public datasets are generally hard to find, are often relocated, and their servers are often down. Having our own archive ensures that our users don't lose time searching for a dataset and that it is available even if the main server is down.
  • By keeping historical copies of public datasets - Public datasets are often updated, and old copies are often discarded even though they provide valuable historical data. Keeping our own copies ensures that we preserve old data.
  • By documenting datasets in a standard way - Public datasets are generally poorly documented, and the documentation provided is often very hard to understand. Documenting frequently used datasets in our own standard way makes it easier to identify the characteristics of a dataset and saves our users time.
  • By archiving and documenting data produced internally - Data produced internally might be useful to other BEACONs & BAM users. It is very important to archive and document them so they don't get lost. Having our own data repository allows us to store all produced datasets in a single place and forces us to document them.
  • By simplifying the understanding of datasets - Public datasets are often composed of a multitude of themes, and it is hard to quickly identify which file is the right one. By repackaging public datasets into single-theme packages and by making sure every dataset is properly documented, we simplify the life of our users.

The inventory itself and the procedure are built in a way to:

  • Allow the people who know the datasets best to document them themselves, in order to avoid producing incorrect information about the datasets.
  • Require everyone to write at least minimal documentation.
  • Ensure that a single person has a final look at every archived product, to enforce consistent documentation and packaging.

Conditions for inventorying a dataset

Before considering a dataset for inclusion in this inventory:

  1. Make sure the dataset is not already inventoried. If it is, you can double-check the metadata in case you have valuable complementary information to add or wrong information to fix.
  2. Is the dataset worth inventorying? Ask yourself, "Is this dataset useful enough that other BAM and BEACONs users will use it at some point?" If not, reconsider the usefulness of adding it to the inventory. Another way to put it is: "Would anybody care if I threw this dataset in the garbage?" If the answer is yes, then you should add the dataset to the inventory.
  3. Is there a better dataset representing the same data? If you know another dataset that better represents the same data as the one you are about to add, you should consider adding that dataset to the inventory instead. We also recommend that you replace old, overly specific or small datasets with newer or more complete ones.

NOTE: In this documentation the terms "dataset" and "product" are used interchangeably. The terms "repository", "inventory" and "foundry" are also used interchangeably.


FIRST STEP - Package your dataset

If the dataset is sufficiently simple (it contains only one theme) and self-documented (it contains proper documentation files), you don't need to repackage it. You can upload it the way you obtained it. But most of the time you will want to repackage a dataset in order to:

  • simplify its composition so it represents only one theme,
  • add some documentation when none exists,
  • convert data stored in an esoteric format to a well-known format.

Here are more detailed guidelines for preparing your dataset before documenting and uploading it:

  1. Make sure the dataset represents only one theme - There should not be more than one theme (one layer) per dataset. If your dataset contains more than one theme, try to separate them into different datasets. People are generally searching for a single theme when looking for a dataset. Having every dataset hold only one theme helps them quickly identify which dataset they need and avoids them losing time trying to understand the composition of a dataset. Remember that we build this inventory to save time for our BEACONs and BAM users. Put yourself in the shoes of a newcomer and make sure you can quickly understand what is what in the dataset. If that is not possible, simplify the dataset and document it as well as you can. It should be straightforward for users to quickly identify the theme of a layer and the meaning of the different values stored in the dataset. Having only one theme per dataset also avoids downloading useless parts of sometimes very large datasets.
  2. Convert files when appropriate - Some datasets are stored in file formats nobody has ever heard of. If you think BEACONs or BAM users might have a hard time reading, importing or converting these data into software like ArcGIS, SAS or Excel, then you should convert them to a more familiar format, provided this does not alter the integrity of the data. For example, we recommend converting ESRI coverage files to shapefiles and ESRI GRID files to TIFF.
  3. Define a projection for shapefiles or images that lack one - Many geospatial datasets we download from the web do not have a coordinate system explicitly defined. If this is the case and you know which coordinate system should be assigned to the dataset, or this information is provided in the dataset's documentation, explicitly assign this coordinate system to the data files. Users often lose a lot of time figuring out why a shapefile does not fit with other layers or what coordinate system is associated with it. If you do it for them, you will save them a lot of time.
  4. If documentation does not exist, add or write some! - Do not write documentation only in the inventory web page. If your dataset does not contain any documentation, at minimum copy what you wrote in the inventory page into a Word or PDF document and put it in every ZIP file of your dataset (or at least at the base of the dataset folder). If you downloaded the dataset and it does not include a documentation file, try to find one and include it in your ZIP file. It is often very easy to print a documentation web page as a PDF and include it as a file in the package. If you obtained the dataset from an individual and some important details were only transmitted by email, put a copy of them in text or PDF format. Remember the rule that an undocumented dataset is USELESS to anyone except you ("you" being included after some months without using the dataset. Memory fails...).
  5. Include the license agreement - Make sure to correctly document whether and how the dataset is licensed. You should always provide a copy of the license (in PDF format) in the package.
  6. Sort the files into meaningful folders - The package should be structured so that it is easy to make sense of what is what: data files, documentation files, license files and other files (tools or any other files related to the dataset). Every package should have a structure similar to this:
     FOCO-NRC-01 (FoundryCode of the dataset)
          |
          +-- Data (only data files, zipped into cohesive groups to avoid downloading unnecessary files)
          |    |
          |    +-- pt_pet27.zip
          |
          +-- Documentation.zip (all the documentation files zipped together)
          |
          +-- License.zip (all the license files zipped together)
          |
          +-- Tools or RelatedFiles (other related files, zipped)
  7. ZIP your files - This avoids download problems and makes downloads faster. If the dataset covers many years or months, ZIP each year or month separately so users can download only the period they are interested in. If there are too many years or months, group them by time period, e.g. "1990-2000".
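The packaging steps above can be sketched in a few lines of Python. This is a hypothetical helper (the `build_package` function and the file names are illustrative, not part of any Foundry tooling) that builds the recommended folder layout and zips each part:

```python
import os
import zipfile

def build_package(root, foundry_code, data_files, doc_files):
    """Build the recommended package layout under root/<FoundryCode>.

    data_files and doc_files map file names to their content; in real
    use you would zip existing files from disk instead.
    """
    pkg = os.path.join(root, foundry_code)
    data_dir = os.path.join(pkg, "Data")
    os.makedirs(data_dir, exist_ok=True)

    # Data files go in Data/, zipped into a cohesive group so users can
    # download only what they need (one zip per year, month or group).
    with zipfile.ZipFile(os.path.join(data_dir, "data.zip"), "w") as z:
        for name, content in data_files.items():
            z.writestr(name, content)

    # All documentation files are zipped together at the package root.
    with zipfile.ZipFile(os.path.join(pkg, "Documentation.zip"), "w") as z:
        for name, content in doc_files.items():
            z.writestr(name, content)

    return pkg
```

A License.zip and a Tools or RelatedFiles folder would be added the same way when the dataset comes with a license file or related tools.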

SECOND STEP - Document your dataset

All the datasets listed in the inventory are documented in a CSV file that is easily imported into Excel or any software handling tabular data. To add metadata about a dataset, you first import the existing metadata CSV file into Excel, then add a line (or many), and then put the CSV file back in the web page. To modify existing documentation or add a new dataset:

  1. Go to this page. It contains a CSV file.
  2. Login and edit the page (the "Edit" (Éditer) button will appear at the upper right corner).
  3. Click in the box containing the data, select all the content (CTRL-A), copy it (CTRL-C) and paste it (CTRL-V) into a new document in your favorite plain text editor.
  4. Save the file with a .csv extension, e.g. "datainventory.csv". You can close the text editor.
  5. Double-click on the file. It should open in Excel if it is installed.
  6. Edit existing documentation or add a new line for your new dataset. Descriptions of the columns to complete are provided below.
  7. Save your file as a CSV file, open it in your favorite simple text editor, copy its content back into the edited page and save.
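If you prefer scripting to Excel, the same round trip can be done programmatically. A minimal sketch (the column names shown are a small subset of the real fields, and `add_dataset` is a hypothetical helper, not Foundry tooling):

```python
import csv
import io

def add_dataset(csv_text, new_row):
    """Parse the CSV text copied from the wiki page, append one metadata
    row, and return the updated CSV text to paste back into the page."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = list(reader)
    rows.append(new_row)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames, lineterminator="\n")
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()
```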

There might be a problem if you are editing the CSV file at the same time as another person. Before updating the page containing the CSV file, make sure that nobody else has edited it in the meantime. You can check that using the "History" button in the upper menu of the page. If someone has updated it, save the modified version on your machine again, re-apply only the lines you changed, and quickly save it back to the page. If the CSV file becomes corrupted, warn Pierre Racine.

If the CSV columns do not get separated correctly in Excel contact Pierre Racine.

Fields Documentation

Here is the description of every column (field) you will find in the CSV file. You SHOULD fill in every column for every product you are documenting.

The values of some fields must be all lower case: DocumentationURL, Category, Theme, CompleteTheme, VariablesTypeUnit, DataStructure, LicenseURL.

Some fields are composed of sentences. Each sentence should start with a capital letter and end with a period: Name, Description, Comment, DataFileDescription.

Field Description
Name What is the name of the dataset?

Copy the name of the dataset. If the dataset has no name or it lacks meaning, create one. It should tell something about the producer, the complete theme, the geographical coverage and the temporal coverage.

e.g. "University of Alberta Mean Temperature per month for Saskatchewan from 1900 to 2000"

The "Name" appears in the first line in the yellow box of the data inventory page following the product FoundryCode.

Description What exactly is this dataset? How was it built? How is it structured?

This is probably the most important and the most difficult field to complete. By reading this description, one should know exactly what this dataset is, and briefly how and why it was produced. You should briefly describe the structure of the files, the naming scheme, and where to find the documentation that allows a quick understanding of the table structure, the column names and the meaning of the coded values. Everything you would have liked to put in the name but that would have made it too long should go here... But be brief; this is not complete documentation either.

Comments Is there anything we should know in order to be able to use this dataset or decide if we need it?

Write here anything that is too specific to go in the description but that could influence a user's decision to download the dataset or not. For example: if there is anything special about a column, if there is a known problem with the dataset, if there is anything specific about the creation of this dataset that users should know, etc...

If there are no comments, write "No comments.".

SourceOrganization What is the name of the organization that produced this dataset?

e.g. "Natural Resources Canada, Canadian Forest Service", "University of Alberta", etc...

SourceContact What is the name of the technical contact responsible for the distribution of information about this dataset?

This might be the person who directly produced the dataset or just a contact person. Add this person's contact information: email address, phone, etc...

e.g.
"Trish Fontaine
Trish.Fontaine@afhe.ualberta.ca
780-492-1497"

DocumentationURL What is the URL of the web site where the documentation for this product is located?

e.g. "ftp://ftp.nofc.cfs.nrcan.gc.ca/README_CFS-ClimScen_Overview_v0.4.rtf"

If and only if the documentation is not on a web site and you provided copies of it in the archive, write "Companion file".
If there is no documentation, write "No documentation". (This is to be avoided (!!!) since you can always create a PDF from the detailed documentation you are writing and attach this PDF to the package or, if the dataset is paper only, a printed version. You can create a PDF using CutePDF.) If you attach this document to the dataset, write "Companion file" as DocumentationURL.

DataURL What is the URL of the web site where the data have been taken from?

Do not confuse this with the "FTPFolder" field which is the folder containing the Foundry version. "DataURL" stores the URL of the original source.

e.g. "ftp://ftp.nofc.cfs.nrcan.gc.ca/canada-10km/historical/ascii/"

If the data is digital but does not exist on any website, write "No site".
If the data exists only in paper form, write "Paper".
If the data were created by BAM or BEACONs and therefore exist only in the Foundry FTP site, write "See Foundry FTP Folder".

Category To which category does the theme of the dataset belong?

We will create new categories as more themes appear. The category should be all in lower case. Existing categories are:

  • climate - precipitation, temperature, air pressure, snow depth, etc...
  • communication network - roads, railways, transmission lines, etc...
  • ecological criteria - water edge, etc...
  • ecological stratification - ecozones, ecoregions, ecodistricts, ecoprovinces, etc...
  • hydrology - basin, drainage area, rivers, lakes, flow rate, etc...
  • intactness - landscape intactness, etc...
  • land classification - land cover, forest cover, etc...
  • relationship - connectivity, etc...
  • topography - elevation, slope, aspect, etc...

Theme What does this layer represent in general? What basically is the theme of this layer?

Ask yourself: "What does a single feature (point, line, polygon, area, pixel) represent in general?" For a tabular dataset: "What does a single line of data represent in general?" We will call this the base theme (as opposed to the complete theme, below, which is what the feature represents exactly).

e.g. "temperature", "elevation", "road", "river", "forest cover", "potential evapotranspiration", "drainage area", etc...

Check whether the theme already exists before creating a new one, and create and document a four-letter code for it in the "FoundryCode" column. Themes should all be lower case and singular. Take care to differentiate the "Theme" from the "CompleteTheme" described below.

If you are archiving a dataset containing only climatic data, write "climatic data".
If you are archiving a dataset containing fewer than four (4) themes, you can separate them with a comma ",".
If you are archiving a dataset containing more than three (3) themes (this should be avoided as far as possible), write "various".

CompleteTheme What is the complete or exact name of the theme?

Ask yourself: "What does a single feature (point, line, polygon, area, pixel) represent exactly?" For a tabular dataset: "What does a single line of data represent exactly?" We will call this the complete theme (as opposed to the base theme, above, which is what the feature represents in general). The "CompleteTheme" could, for example, include the frequency at which the measure is taken (bidaily, daily, weekly, etc...) or the time scale at which it is aggregated (daily, weekly, monthly, annually, etc...).

e.g. "annual mean temperature", "monthly maximum temperature", "limited access road", "major river", "30-year normals Priestley-Taylor potential evapotranspiration", etc...

Themes should all be lower case and singular. Take care to differentiate the "CompleteTheme" from the "Theme" described above. If the complete theme is a base theme in itself, just copy it from the theme.

VariablesTypeUnit What are the variables contained in this dataset? What are the types and the units of these variables?

For raster layers, this is generally the same as the theme. Add the pixel type and the unit of the variable separated by a comma in parentheses.

e.g. "temperature(8-bit unsigned integer, celsius)"

For vector layers and tabular and paper datasets, list all the significant variables (or columns) contained in the table. Add their respective types and units in parentheses.

If the dataset is stored in a non-typed format like Excel or paper, write "integer" or "decimal" for the type according to the values stored.

In any case, if the variable is not a measure, write, in place of type and unit:

  • "number" if it is a numeric representing a number. e.g.: "126", "374", "3", etc...
  • "percentage" if it is a numeric representing a percentage. e.g.: "90.3", "4", etc...
  • "year" if it is a numeric representing a year. e.g.: "1956", "1970", "2008", etc...
  • "date" if it is a date or a text representing a date. e.g.: "November 4, 1956", "10-12-1970", "2008/03/07", etc...
  • "hour" if it is an alphanumeric representing an hour. e.g.: "19:30", "2PM", "6h32", etc...
  • "code" if it is an alphanumeric code identifying a category. e.g.: "AAB", "2", "Water", etc...
  • "identifier" if it is an alphanumeric identifier that does not identify a category and is not necessarily unique. e.g.: "12", "1456-RTG3", "Water", etc...
  • "unique identifier" if it is an alphanumeric identifier that does not identify a category and IS unique. e.g.: "12", "1456-RTG3", etc...

Separate each "variable(type, unit)" set with a comma.

e.g. "temperature (integer, celsius), elevation (integer, meter)", "area (double, square meter), cover type (text, code)", etc...

If you wrote "various" for the Theme, write "NA".
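The "variable(type, unit)" convention is regular enough to check mechanically. The following is a sketch based on the examples above; the grammar is inferred from them, not an official specification:

```python
import re

# One entry is "name(type)" or "name(type, unit)"; entries are separated
# by commas. Names, types and units may not contain parentheses.
_ENTRY = re.compile(r"([^(),]+)\(([^()]*)\)")

def parse_variables(value):
    """Return a list of (variable, type[, unit]) tuples parsed from a
    VariablesTypeUnit field value."""
    return [
        tuple([name.strip(" ,")] + [p.strip() for p in parts.split(",")])
        for name, parts in _ENTRY.findall(value)
    ]
```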

FoundryCode What is the product Foundry Code for this dataset?

This is OUR code, created to uniquely identify a dataset in the Foundry. Create it using four letters for the theme of the product, three letters for the source organization, and two digits to make codes unique when the theme and the source are identical. Existing codes for themes and source organizations are listed below.

e.g. "TEMP-NRC-01", "PREC-UAL-04", etc...

Theme code

  • BASN - basin
  • CLIM - climate data
  • CMIN - climate moisture index
  • CONN - connectivity
  • DRAA - drainage area
  • FOCO - forest cover
  • ECOD - ecodistrict
  • ECOP - ecoprovince
  • ECOR - ecoregion
  • ECOZ - ecozone
  • EDEN - edge density
  • ELEV - elevation
  • INDX - index
  • LACO - land cover
  • LINT - intactness
  • PHOC - photo centroid
  • POEV - potential evapotranspiration
  • PREC - precipitation
  • PROA - protected area
  • ROAD - roads
  • TEMP - temperature
  • VARI - various
  • TBD - theme still to be determined

Organization code

Federal Organizations

  • NRC - Natural Resources Canada
  • ENC - Environment Canada
  • AAC - Agriculture and Agri-Food Canada

Provincial Organizations

  • NRQ - Ministère des ressources naturelles et de la faune du Québec

Universities

  • UAB - University of Alberta
  • ULA - University Laval
  • FOM - University Laval Montmorency Forest

Research Groups

  • BEA - BEACONs, University of Alberta
  • NCA - National Center for Atmospheric Research, Colorado, USA

Others

  • GFW - Global Forest Watch Canada
  • TIM - Timberline Natural Resource Group

Please add any code you create to this section of the documentation.
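A Foundry Code is regular enough to validate automatically. This sketch encodes the rule described above (four theme letters, three organization letters, two digits); treating the three-letter "TBD" theme code as an exception is an assumption based on the theme list:

```python
import re

# FoundryCode pattern: four upper-case theme letters (or "TBD"), three
# organization letters, then two digits, separated by hyphens.
_FOUNDRY_CODE = re.compile(r"^(?:[A-Z]{4}|TBD)-[A-Z]{3}-\d{2}$")

def is_foundry_code(code):
    """Return True if code follows the FoundryCode naming rule."""
    return bool(_FOUNDRY_CODE.match(code))
```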

OriginalProduct What is the Product Code of the product this product has been derived from?

e.g. "PREC-NRC-01"

To keep the metadata database consistent, the product identified here should also list the product currently being documented among its DerivedProducts.

If the dataset is not derived from another product, write "not derived".
If the dataset is derived from a product that is not archived in this catalog, write the name and description of the original dataset with the URL of the web page where the data are located. e.g. "Landsat from http://www.landsat.org/" This description should not include any comma (,).

DerivedProducts Which products (identified by their Product Code) are derivations or subsequent versions of this product?

If many products are derived, separate their code with a comma. To keep the metadata database consistent, every product listed here should have the product currently being documented listed as OriginalProduct.

e.g. "PREC-NRC-03, PREC-NRC-04"

If no product is derived from this dataset, write "none".
If a product derived from this dataset is not archived in this catalog, write the name and description of that product with the URL of the web page where it is located. e.g. "Landsat from http://www.landsat.org/"

SimilarAndComplementaryProducts Which products (identified by their Product Code) are similar or complementary to this product?

Similar or complementary datasets are datasets which are neither derived from this product nor its original, but still represent the same theme and were created by different entities (persons or organizations), with slightly different methods, at a different time, or at a different temporal or spatial scale.

When many products are similar or complementary, separate their code with a comma. To keep the metadata database consistent, every product listed here should have the product currently being documented listed as SimilarAndComplementaryProducts.

e.g. "PREC-NRC-03, PREC-NRC-04"

If no products are similar or complementary to this dataset, write "none".
If some products similar or complementary to this dataset are not archived in this catalog, write the name and description of the original product with the URL of the web page where it is located (within parentheses if other products are mentioned). e.g. "Landsat from http://www.landsat.org/"

DataStructure Is this a paper form, a paper map, a digital tabular, a digital vector or a raster dataset?

Can be "paper", "paper map", "tabular", "vector (various)", "vector (point)", "vector (line)", "vector (polygon)" or "raster" all in lower case.

Write "vector (various)" only when the Theme is "various" and the product contains layers with different spatial models.

SpatialCoverage What is the geographical coverage offered by this dataset?

The value should always go from the more general to the more specific so that it is easy to sort the table on this field.

e.g. "Canada", "Canada Boreal", "Québec", "Québec Boreal", "British-Columbia", etc...

TemporalCoverage What is the temporal coverage offered by this dataset?

This can be a time range.

e.g. "1998", "1990-2000", "jan1995-oct1995", etc...

If there is no temporal coverage, write "NA".

ScaleResolution What is the scale for a vector layer? What is the resolution for a raster layer? (in meters)

e.g. to indicate the scale of a vector dataset: "1:10000"
e.g. to indicate the resolution of a raster dataset: "30"

If a vector dataset was produced using data sources at various scales or resolutions, write "Various".
If this is a paper or a tabular dataset, write "NA".
If you really cannot determine the scale at which a vector layer was produced, write "Unknown".

CoordinateSystem What is the coordinate system of the data files?

If this is a paper or a tabular dataset, write "NA".

The first step is to determine whether the coordinate system (CS) of the dataset is an ESRI or an EPSG standard CS. Compare its parameters (visible from the layer "Properties" dialog box in ArcCatalog) with those of a corresponding ESRI CS selectable from the ESRI hierarchy of CSs available through the "Select" option of the same dialog box.

If you can find a CS with the same parameters then it is an ESRI CS and you must find its code in one of these two documents:

If the corresponding code is lower than 32767 it is a standard EPSG CS. You should then describe it like this: "EPSG:Code: Name"

Some common EPSG geographic coordinate systems are:

  • EPSG:4269: NAD83
  • EPSG:4326: GCS WGS 1984

Some common EPSG projected coordinate systems are:

  • EPSG:32198: NAD83/Quebec Lambert

If the corresponding code is higher than 32766, it is an ESRI CS. You should then describe it like this: "ESRI:Code: Name"

Some common ESRI projected coordinate systems are:

  • ESRI:102002: NAD83/Canada_Lambert_Conformal_Conic
  • ESRI:102009: NAD83/North_America_Lambert_Conformal_Conic

Make sure to write both the datum and the projection. In both cases you don't have to document the CS parameters.

You can easily search and double-check the number associated with the CS by searching for the number or a keyword in this web site. If this is an EPSG standard CS, you can also double-check in the official EPSG CS Microsoft Access database available at http://www.epsg.org/Geodetic.html

If you cannot find a corresponding CS, it means the CS was created specifically for this dataset. You should describe it like this: "SPECIFIC CS: Datum/Name Parameters". You can directly copy all the parameters from the "Properties->XY Coordinate System" window of ArcCatalog.

e.g.
"SPECIFIC CS: NAD83/Lambert Conformal Conic
Central Meridian: -95.00000
1st Standard Parallel: 49.00000
2nd Standard Parallel: 77.00000
Latitude of Origin: 0.00000
False Easting (meters): 0.00000
False Northing (meters): 0.00000"

FTPFolder What is the URL of the dataset folder on the Foundry FTP site?

Do not confuse this with the "DataURL" field, which is the original site from which the data were copied. "FTPFolder" stores the URL where the data were copied for exclusive BAM & BEACONs usage. Only the main repository manager should edit this field, since he is the one who copies the dataset to the main repository.

e.g. "ftp://bdf.sbf.ulaval.ca/BOREALDATAFOUNDRY/climate/original"

DataFileDescription How are the files arranged in the archive?

This gives a quick idea of how the data files are structured and makes it possible to double-check that what we see in the folder is as expected. There is no need to list the documentation and license files. If there are only a few files, list them (ex. 1). If they are too numerous, describe them (ex. 2). You can also express the list with patterns (ex. 3).

Textual descriptions written as sentences should start with a capital letter and end with a period.

If the dataset is in paper form, describe how the files are organized (stapled, ring-bound, etc...) and what they look like (dimensions, color, etc...) (ex. 4 and 5).

Example 1
3 tar files each containing 1 raster for each month of every contained year:
hist_maxt_abs_ascii_1971-1980.tar
hist_maxt_abs_ascii_1981-1990.tar
hist_maxt_abs_ascii_1991-2000.tar

Example 2
24 zipped shapefiles, two per month for 2002

Example 3
2 raster files for each month of every year from 1971 to 2000:
m_MMM_71_00
sd_MMM_71_00

with MMM = month (jan-dec)

Example 4
Green sheets 14 cm x 19 cm, stapled by year, for 1965 to 1982. Red sheets 21 cm x 29.7 cm, stapled by month, for 1983 to 2003.

Example 5
Red and blue binders 23.7 cm x 35.4 cm, grouped by year, for 1965 to 1982.

DataFormat What is the format of the data files?

e.g. "CSV", "TEXT", "TIFF", "JPEG", "ESRI Grid", "ESRI ASCII Grid", "NetCDF", "Shapefile", "MID-MIF", "ESRI Interchange Files (E00)", etc...

If there are many formats, separate them with a comma. e.g. "TIFF, JPEG"
If the dataset is in paper form, write "Paper".

TypicalFileSize What is the typical size of a single downloadable zip file in the archive in MB?

e.g. "2" for 2MB

If most files are below 1MB, write "1".

If the dataset is in paper form, write "NA".

TotalArchiveSize What is the total size of the dataset, including all zipped files in the archive, in MB?

e.g. "123" for 123MB

If the dataset is in paper form, write "NA".

LicensingConditions Who can use it, how can it be used and does it require acknowledgements?

Write a brief description of the conditions of use under the license agreement.

e.g. "Only BEACONs people can use this dataset. Data should be deleted after use. Please acknowledge any use of the data in any report or publication produced."

e.g. "You have to acknowledge the producer if you use this dataset."

If there are no conditions, write "No conditions".

LicenseURL What is the URL of the web site where the license for this product is located?

e.g. "ftp://www.nofc.cfs.nrcan.gc.ca/README_CFS-ClimScen_Overview_v0.4.rtf"

If there are no restrictions, write "No license file".
If the license is not on a web site, write "Companion file". You should then include a PDF version of the license in the package.

DocumentedBy Who documented this dataset?

e.g. "Kim Lisgo"

DocumentedWhen When was this dataset documented?

Enter the year, then the month, then the day, so that everything sorts well in Excel and on the main page.

e.g. "2009-03-24"
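A quick illustration of why the year-month-day order matters: "YYYY-MM-DD" strings sort chronologically under plain text sorting (which is what Excel and the inventory page do), while day-first strings do not.

```python
iso = ["2008-12-31", "2009-03-24"]   # already in chronological order
dmy = ["31-12-2008", "24-03-2009"]   # same dates, written day-first

# Lexicographic sort matches chronological order only for the ISO form.
print(sorted(iso) == iso)  # True
print(sorted(dmy) == dmy)  # False: "24-..." sorts before "31-..."
```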


THIRD STEP - Upload your dataset package to the Foundry FTP site

Every dataset is archived at ftp://bdf.sbf.ulaval.ca/BOREALDATAFOUNDRY in its own folder. This folder is read-only, and only the main repository manager can copy files into it. You must therefore first upload your package to the "PACKAGEUPLOAD" folder and then ask the main repository manager to copy it for you into the "BOREALDATAFOUNDRY" folder.

To add a dataset:

  1. upload the package you want to add to the "PACKAGEUPLOAD" folder of the Foundry FTP site,
  2. ask the site manager (for now Pierre Racine) to copy the dataset for you to the "BOREALDATAFOUNDRY" folder. He will double-check that the dataset is well packaged and documented before copying it to the final "BOREALDATAFOUNDRY" folder for you.

Every dataset MUST be well documented BEFORE it is moved to the "BOREALDATAFOUNDRY" folder. This is to prevent archiving of undocumented (and therefore useless!) datasets.
