View on GitHub

DatAasee - A Metadatalake for Libraries

DatAasee centralizes and interlinks distributed library/research metadata into an API‑first union catalog.

DatAasee Software Documentation

Version: 0.5

DatAasee is a metadata-lake for centralizing bibliographic data and scientific metadata from different sources, to increase research data findability and discoverability as well as metadata availability, and thus to support FAIR research in university, research, academic, and scientific libraries.

In particular, DatAasee is developed for and by the University and State Library of Münster, but it is openly available under a free and open-source license.

Table of Contents:

  1. Explanations (understanding-oriented)
  2. How-Tos (goal-oriented)
  3. References (information-oriented)
  4. Tutorials (learning-oriented)
  5. Appendix (development-oriented)

Selected Subsections:


1. Explanations

In this section, in-depth explanations and background information are collected.

Overview:

About

Features

Features in brackets [ ] are under construction.

Components

DatAasee uses a three-tier architecture with the following separately containerized components, orchestrated by Compose:

| Function | Abstraction | Tier | Product |
|---|---|---|---|
| Metadata Catalog | Multi-Model Database | Data (Database) | ArcadeDB |
| EtLT Processor | Declarative Streaming Processor | Logic (Backend) | Benthos |
| Web Frontend | Declarative Web Framework | Presentation (Frontend) | Lowdefy |

Design

Data Model

The internal data model is based on the one big table (OBT) approach, with the exception of linked enumerated dimensions (look-up tables), making it effectively a denormalized wide table with a star schema. Practically, a graph database is used with a central node (vertex) type (cf. table) named metadata. The node properties are based on DataCite for the descriptive metadata.
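
For illustration, a custom SQL query (see the query languages reference) can project wide-table properties together with a linked dimension; a minimal sketch, assuming ArcadeDB's dot notation for link traversal:

SELECT name, publicationYear, language.name AS language FROM metadata LIMIT 10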

EtLT Process

The ingest of metadata records follows typical data transport patterns. Combining the ETL (Extract-Transform-Load / schema-on-write) and ELT (Extract-Load-Transform / schema-on-read) concepts, processing is built upon an EtLT approach:

In particular, this means the “EtL” steps happen (batch-wise) during ingest, while the final “T” occurs on request.

Persistence

Backup and Restore:

Security

Secrets:

Infrastructure:

Interface:


2. How-Tos

In this section, step-by-step guides for real-world problems are listed.

Overview:

Prerequisite

The (virtual) machine deploying DatAasee requires docker-compose on top of docker or podman, see also container engine compatibility.

Resources

The compute and memory resources for DatAasee can be configured via the compose.yaml. To run, a bare-metal machine or virtual machine requires:

In terms of DatAasee components this breaks down to:

Note that resource and system requirements depend on load; especially the database is under heavy load during ingest. After an ingest, (new) metadata records are interrelated, which also causes heavy database load. Generally, the database drives the overall performance. Thus, to improve performance, first try to increase the memory limits (in the compose.yaml) for the database component (e.g. from 4G to 24G).
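
Such a change might look as follows; this is a minimal sketch assuming the database service declares its limit under deploy.resources.limits (the actual service name and structure in the compose.yaml may differ):

services:
  database:
    deploy:
      resources:
        limits:
          memory: 24G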

Using DatAasee

In this section the terms “operator” and “user” are used, where “operator” refers to the party installing, serving, and maintaining DatAasee, and “user” refers to the individuals or services reading from DatAasee.

Operator Activities

User Activities

This means the user can only use the GET API endpoints, while the operator typically uses the POST API endpoints.

For details about the HTTP-API calls, see the API reference.

Deploy

Deploy DatAasee via (Docker) Compose by providing the two secrets:

For further details see the Getting Started tutorial as well as the compose.yaml and the Docker Compose file reference.

$ mkdir -p backup  # or: ln -s /path/to/backup/volume backup
$ wget https://raw.githubusercontent.com/ulbmuenster/dataasee/0.5/compose.yaml
$  DL_PASS=password1 DB_PASS=password2 docker compose up -d

NOTE: The backup folder (or mount) must be readable and writable by root (more precisely, by the database container’s user, which root can represent on the host). Thus, a change of ownership via sudo chown root backup is typically required. For testing purposes chmod o+w backup is fine, but not recommended for production.

NOTE: The required secrets are kept in the temporary environment variables DL_PASS and DB_PASS; note the leading space before the docker compose command, which omits the line from the shell history.

NOTE: To further customize your deployment, use environment variables. The runtime configuration environment variables can be stored in an .env file.

WARNING: Do not put secrets into the .env file!
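
For illustration, an .env file with non-secret runtime configuration only might look like this (all values are examples; see the runtime configuration reference):

TZ=CET
DL_VERSION=0.5
DL_BASE=http://my.url
DL_PORT=8343
FE_PORT=8000
DL_BACKUP=/path/to/backup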

Logs

$ docker compose logs backend --no-log-prefix

NOTE: For better readability the log output can be piped through grep -E --color '^([^\s]*)\s|$' highlighting the text before the first whitespace, which corresponds to the log-level in the DatAasee logs.
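
For example, combining both commands (illustrative):

$ docker compose logs backend --no-log-prefix | grep -E --color '^([^\s]*)\s|$'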

Shutdown

$ docker compose down

NOTE: A (database) backup is automatically triggered on every shutdown.

Probe

For further details see /ready endpoint API reference entry.

$ wget -SqO- http://localhost:8343/api/v1/ready

NOTE: The default port for the HTTP API is 8343.

Ingest

For further details see /ingest endpoint API reference entry.

$ wget -qO- http://localhost:8343/api/v1/ingest --user admin --ask-password --post-data \
  '{"source":"https://my.url/to/oai","method":"oai-pmh","format":"mods","rights":"CC0","steward":"steward.data@metadata.source"}'

NOTE: This is an async action; progress and completion are only noted in the logs.

NOTE: The rights field should be a controlled term setting the access rights or license of the ingested metadata records.

NOTE: The steward field should be a URL or email, but can be any identification of a responsible person for the ingested source.

Backup (Manually)

For further details see /backup endpoint API reference entry.

$ wget -qO- http://localhost:8343/api/v1/backup --user admin --ask-password --post-data=

NOTE: A (database) backup is also automatically triggered after every ingest.

NOTE: A custom backup location can be specified via DL_BACKUP or inside the compose.yaml.
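
For example, to deploy with a custom backup location (the path is illustrative):

$  DL_PASS=password1 DB_PASS=password2 DL_BACKUP=/mnt/backup docker compose up -d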

Update

$ docker compose pull
$  DL_PASS=password1 DB_PASS=password2 docker compose up -d

NOTE: “Update” means: if available, new images of the same DatAasee version but with updated dependencies will be installed, whereas “Upgrade” means: a new version of DatAasee will be installed.

NOTE: An update terminates an ongoing ingest or interconnect process.

Upgrade

$ docker compose down
$  DL_PASS=password1 DB_PASS=password2 DL_VERSION=0.5 docker compose up -d

NOTE: docker compose restart cannot be used here because environment variables (such as DL_VERSION) are not updated when using restart.

NOTE: Make sure to put the DL_VERSION variable into an .env file for a permanent upgrade or edit the compose file.
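
For example, to pin the version permanently (illustrative; alternatively edit the compose file):

$ echo "DL_VERSION=0.5" >> .env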

Reset

$ docker compose restart

NOTE: A reset may become necessary if, for example, the backend crashes during an ingest; a database backup is created during a reset, too.

Web Interface (Prototype)

All frontend pages show a menu on the left side listing all other pages, as well as an indicator if the backend server is ready.

NOTE: The default port for the web frontend is 8000, i.e. http://localhost:8000, and can be adapted in the compose.yaml.


Home Screenshot

The home page has a full-text search input.


Resolve Screenshot

The “DOI Search” page takes a DOI and returns the associated metadata record.


List Screenshot

The “List Records” page allows listing all metadata records from a selected source.


Filter Screenshot

The “Filter Search” page allows filtering for a subset of metadata records by categories, resource types, languages, licenses, or publication year.


Query Screenshot

The “Custom Query” page allows entering a query via sql, cypher, gremlin, graphql, or mql.


Statistics Screenshot

The “Statistics Overview” page shows top-10 bar graphs for number of views, publication years, and keywords, as well as top-100 pie charts for resource types, categories, licenses, subjects, languages, and metadata schemas.


About Screenshot

The “Interface Summary” page lists the backend API endpoints and provides links to parameter, request, and response schemas.


Fetch Screenshot

The “Display Record” page presents a single metadata record.


Insert Screenshot

The “Insert Record” page allows inserting a single metadata record.


Admin Screenshot

The “Admin Controls” page allows triggering backend actions such as database backup, source ingest, or health check.

API Indexing

Add the JSON object below to the apis array in your global apis.json:

{
  "name": "DatAasee API",
  "description": "The DatAasee API enables research data search and discovery via metadata",
  "keywords": ["Metadata"],
  "attribution": "DatAasee",
  "baseURL": "http://your-dataasee.url/api/v1",
  "properties": [
    {
      "type": "InterfaceLicense",
      "url": "https://creativecommons.org/licenses/by/4.0/"
    },
    {
      "type": "x-openapi",
      "url": "http://your-dataasee.url/api/v1/api"
    }
  ]
}

For FAIRiCat, add the JSON object below to the linkset array:

{
  "anchor": "http://your-dataasee.url/api/v1",
  "service-doc": [
    {
      "href": "http://your-dataasee.url/api/v1/api",
      "type": "application/json",
      "title": "DatAasee API"
    }
  ]
}

3. References

In this section technical descriptions are summarized.

Overview:

Runtime Configuration

The following environment variables affect DatAasee if set before starting:

| Symbol | Value | Meaning |
|---|---|---|
| TZ | CET (Default) | Timezone of all component servers |
| DL_PASS | password1 (Example) | DatAasee password (use only command-local!) |
| DB_PASS | password2 (Example) | Database password (use only command-local!) |
| DL_VERSION | 0.5 (Example) | Requested DatAasee version |
| DL_BACKUP | $PWD/backup (Default) | Path to backup folder |
| DL_USER | admin (Default) | DatAasee admin username |
| DL_BASE | http://my.url (Example) | Outward DatAasee base URL (including protocol and port, but no trailing slash) |
| DL_PORT | 8343 (Default) | DatAasee API port |
| FE_PORT | 8000 (Default) | Web Frontend port |

HTTP-API

The HTTP-API is served under http://<your-base-url>:port/api/v1 (see DL_PORT and DL_BASE) and provides the following endpoints:

| Method | Endpoint | Type | Summary |
|---|---|---|---|
| GET | /ready | system | Returns service readiness |
| GET | /api | system | Returns API specification and schemas |
| GET | /schema | support | Returns database schema |
| GET | /enums | support | Returns enumerated property values |
| GET | /stats | support | Returns metadata record statistics |
| GET | /sources | support | Returns ingested metadata sources |
| GET | /metadata | data | Returns queried metadata record(s) |
| POST | /ingest | system | Triggers async ingest of metadata records from source |
| POST | /insert | system | Inserts single metadata record (discouraged) |
| POST | /backup | system | Triggers database backup |
| POST | /health | system | Probes and returns service liveness |

For more details see also the associated OpenAPI definition. Furthermore, parameters, request bodies, and response bodies are specified as JSON Schemas, which are linked in the respective endpoint entries below.

NOTE: The default base path for all endpoints is /api/v1.

NOTE: All GET requests are unchallenged, while all POST requests are challenged via “Basic Authentication”: the username is admin (by default, or as set via DL_USER) and the password is the one set via DL_PASS.

NOTE: All request and response bodies have content type JSON, thus, if provided, the Content-Type HTTP header must be application/json or application/vnd.api+json!

NOTE: Responses follow the JSON:API format, with the exception of the /api endpoint.

NOTE: The id property, in a response’s data property, is the server’s Unix timestamp.
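
For illustration, a challenged request with an explicit JSON content type might look as follows (mirroring the endpoint examples below):

$ wget -qO- http://localhost:8343/api/v1/health --user admin --ask-password --header='Content-Type: application/json' --post-data=''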


/ready Endpoint

Returns a boolean answering whether the service is ready.

This endpoint is meant for readiness checks by an orchestrator, monitoring or in a frontend.

NOTE: The ready endpoint can be used as readiness probe.

NOTE: Internally, the overall readiness consists of the backend server AND database server readiness.

Status:

Example:

Get service readiness:

$ wget -qO- http://localhost:8343/api/v1/ready

/api Endpoint

Returns the OpenAPI specification (without parameters), or the parameter, request, and response schemas (with the respective parameter).

This endpoint documents the HTTP API as a whole as well as the parameter, request, and response JSON schemas for all endpoints, and helps humans and machines navigate the API.

NOTE: In case of a successful request, the response is NOT in the JSON:API format, but the requested JSON file directly.

Statuses:

Examples:

Get OpenAPI definition:

$ wget -qO- http://localhost:8343/api/v1/api

Get enums endpoint parameter schema:

$ wget -qO- http://localhost:8343/api/v1/api?params=enums

Get ingest endpoint request schema:

$ wget -qO- http://localhost:8343/api/v1/api?request=ingest

Get metadata endpoint response schema:

$ wget -qO- http://localhost:8343/api/v1/api?response=metadata

/schema Endpoint

Returns internal metadata schema.

This endpoint provides the hierarchy of the data model as well as labels and descriptions for all properties, and is meant for labels, hints or tooltips in a frontend.

NOTE: Keys prefixed with @ refer to meta information (schema version, type comment) or edge type names.

Statuses:

Example:

Get native metadata schema:

$ wget -qO- http://localhost:8343/api/v1/schema

/enums Endpoint

Returns lists of enumerated property values.

This endpoint returns lists of possible values for the categories, languages, licenses, relations, resourceTypes, and schemas properties, as well as suggested values for the name sub-property of the externalLinks and synonyms properties, for frontends and query design.

Statuses:

Example:

Get all enumerated properties:

$ wget -qO- http://localhost:8343/api/v1/enums

Get “languages” property values:

$ wget -qO- http://localhost:8343/api/v1/enums?type=languages

/stats Endpoint

Returns statistics about records.

The returned Top-10 viewed records, occurring publication years, and keywords, as well as the Top-100 resource types, categories, licenses, subjects, languages, and metadata schemas, are meant for frontend dashboards and serve as an example for querying statistics.

Statuses:

Example:

Get record statistics:

$ wget -qO- http://localhost:8343/api/v1/stats

/sources Endpoint

Returns ingested sources.

This endpoint is meant for downstream services to obtain the ingested sources, which in a subsequent query can be used to filter by source.

Statuses:

Example:

Get ingested sources:

$ wget -qO- http://localhost:8343/api/v1/sources

/metadata Endpoint

Fetches, searches, filters or queries metadata record(s). Five modes of operation are available:

Paging via page is supported only for the source query and the combined full-text and filter search; sorting via newest is supported only for the latter.

This is the main endpoint serving the metadata records of the DatAasee database.

NOTE: Only idempotent read operations are permitted in custom queries.

NOTE: This endpoint’s responses include pagination links, where applicable.

NOTE: For requests with id at most one result is returned. For requests with query or source at most one-hundred results are returned per page. Other requests return at most twenty results per page.

NOTE: An explicitly empty source parameter (i.e. source=) implies all sources.

NOTE: A full-text search always matches for all argument terms (AND-based) in titles, descriptions and keywords in any order, while accepting * as wildcards and _ to build phrases, for example: I_*_a_dream.

NOTE: The type in a BibJSON export is renamed entrytype due to a collision with JSON:API constraints.

NOTE: The id=dataasee is a special record in the backend for testing purposes; it is not stored in the database.

Statuses:

Examples:

Get record by record identifier:

$ wget -qO- http://localhost:8343/api/v1/metadata?id=dataasee

Export record in given format:

$ wget -qO- 'http://localhost:8343/api/v1/metadata?id=dataasee&format=datacite'

Search records by single filter:

$ wget -qO- http://localhost:8343/api/v1/metadata?language=chinese

Search records by multiple filters:

$ wget -qO- 'http://localhost:8343/api/v1/metadata?resourcetype=book&language=german'

Search records by full-text for word “History”:

$ wget -qO- http://localhost:8343/api/v1/metadata?search=History

Search records by full-text and filter, oldest first:

$ wget -qO- 'http://localhost:8343/api/v1/metadata?search=Geschichte&resourcetype=book&language=german&newest=false'

Search records by custom SQL query:

$ wget -qO- 'http://localhost:8343/api/v1/metadata?language=sql&query=SELECT%20FROM%20metadata%20LIMIT%2010'

List the second page of records from all sources:

$ wget -qO- 'http://localhost:8343/api/v1/metadata?source=&page=1'
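
Search records by a full-text phrase with wildcards (illustrative, using the syntax described above; the URL is quoted so the shell does not expand * or split at special characters):

$ wget -qO- 'http://localhost:8343/api/v1/metadata?search=I_*_a_dream'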

/ingest Endpoint

Triggers async ingest of metadata records from source.

This endpoint is the principal way to transport (meta-)data into DatAasee.

NOTE: This is an async action, so the response just reports if an ingest was started. Completion is noted in the backend logs and the subsequent interconnect in the database logs.

NOTE: To test if the server is busy, send an empty (POST) body to this endpoint.

NOTE: The method and format parameters are case-sensitive.

NOTE: The options field follows selective harvesting in OAI-PMH; admissible values are, for example, from=2000-01-01 or set=institution&from=2000-01-01.

Status:

Example:

Check if server is busy ingesting:

$ wget -qO- http://localhost:8343/api/v1/ingest --user admin --ask-password --post-data=''

Start ingest from a given source:

$ wget -qO- http://localhost:8343/api/v1/ingest --user admin --ask-password --post-data \
  '{"source":"https://datastore.uni-muenster.de/oai","method":"oai-pmh","format":"datacite","rights":"CC0","steward":"forschungsdaten@uni-muenster.de"}'
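
Start ingest restricted via selective-harvesting options (illustrative; assumes the source supports the from argument):

$ wget -qO- http://localhost:8343/api/v1/ingest --user admin --ask-password --post-data \
  '{"source":"https://my.url/to/oai","method":"oai-pmh","format":"mods","rights":"CC0","steward":"steward.data@metadata.source","options":"from=2000-01-01"}'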

/insert Endpoint

Inserts a new record into the database, parsing it if necessary.

This endpoint allows manually inserting a metadata record; however, this functionality is meant for testing and for corner cases that cannot be ingested, for example, a report about a DatAasee instance.

NOTE: Generally, the usage of this endpoint is discouraged.

NOTE: Enumerated properties in the body (resourceType, language, license, categories) are only set if their values are found in the controlled vocabulary. The respective enumerations can be obtained via the enums endpoint.

Status:

Example:

Insert record with given fields: TODO:

$ wget -qO- http://localhost:8343/api/v1/insert --user admin --ask-password --post-file=myinsert.json
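
A hypothetical sketch of such a myinsert.json covering the mandatory descriptive properties; the exact field names and shapes are defined by the insert request schema (see /api?request=insert), and enumerated values not found in the controlled vocabulary are simply not set:

{
  "name": "Report about a DatAasee Instance",
  "creators": [ { "name": "Jane Doe", "data": "https://orcid.org/0000-0000-0000-0000" } ],
  "publisher": "Example Library",
  "publicationYear": 2024,
  "resourceType": "Report",
  "identifiers": [ { "name": "url", "data": "https://my.url/report" } ]
}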

/backup Endpoint

Triggers a database backup.

This endpoint is meant to create an on-demand backup in addition to the on-shutdown and after-ingest backups, to back up usage data.

NOTE: The backup location on the host can be set through the DL_BACKUP environment variable.

NOTE: If a backup request times out, the backup likely just takes longer than expected.

Status:

Example:

Back up the database and thus all state of DatAasee:

$ wget -qO- http://localhost:8343/api/v1/backup --user admin --ask-password --post-data=''

/health Endpoint

Returns internal status and versions of service components.

This endpoint is meant for liveness checks by an orchestrator, observability or for manually inspecting the database and processor health.

NOTE: The health endpoint can be used as liveness probe.

Status:

Example:

Get service health:

$ wget -qO- http://localhost:8343/api/v1/health --user admin --ask-password --post-data=''

Ingest Protocols

Ingest Encodings

Currently, XML (eXtensible Markup Language) is the sole encoding for ingested metadata, with the exception of ingesting via the DatAasee protocol, which uses JSON (JavaScript Object Notation).

Ingest Formats

Native Schema

The underlying DBMS (ArcadeDB) is a property-graph database whose nodes (vertexes) and edges are documents (resembling JSON files). The graph nature is utilized by interconnecting records (vertex documents) via identifiers (e.g. DOI) during ingest, given a set of predefined relations.

Conceptually, the data model for metadata records has five sections: Process, Technical, Social, Descriptive, and Raw.

The central type of the metadatalake database is the metadata vertex type, with the following properties:

| Key | Section | Entry | Internal Type | Constraints | Comment |
|---|---|---|---|---|---|
| schemaVersion | Process | Automatic | Integer | =1 | |
| recordId | Process | Automatic | String | max 31 | Hash (XXH64) of: source, format, sourceId/publisher, publicationYear, name |
| metadataFormat | Process | Automatic | String | max 255 | |
| metadataQuality | Process | Automatic | String | max 255 | Currently one of: “Incomplete”, “OK” |
| dataSteward | Process | Automatic | String | max 4095 | |
| source | Process | Automatic | Link(pair) | sources | |
| sourceRights | Process | Automatic | String | max 4095 | |
| createdAt | Process | Automatic | Datetime | | |
| sizeBytes | Technical | Optional | Integer | min 0 | |
| dataFormat | Technical | Optional | String | max 255 | |
| dataLocation | Technical | Optional | String | max 4095, URL regex | |
| numberViews | Social | Automatic | Integer | min 0 | |
| keywords | Social | Optional | String | max 255 | Full-text indexed, comma separated |
| categories | Social | Optional | List(String) | max 4 | Pass array of strings to API, returned as array of strings from API |
| name | Descriptive | Mandatory | String | max 255 | Full-text indexed, title |
| creators | Descriptive | Mandatory | List(pair) | max 255 | Pass array of pair objects (name, identifier) to API |
| publisher | Descriptive | Mandatory | String | max 255 | |
| publicationYear | Descriptive | Mandatory | Integer | min -9999, max 9999 | |
| resourceType | Descriptive | Mandatory | Link(pair) | resourceTypes | Pass string to API, returned as string from API |
| identifiers | Descriptive | Mandatory | List(pair) | max 255 | Pass array of pair objects (type, identifier) to API |
| synonyms | Descriptive | Optional | List(pair) | max 255 | Pass array of pair objects (type, title) to API |
| language | Descriptive | Optional | Link(pair) | languages | Pass string to API, returned as string from API |
| subjects | Descriptive | Optional | List(pair) | max 255 | Pass array of pair objects (name, identifier) to API |
| version | Descriptive | Optional | String | max 255 | |
| license | Descriptive | Optional | Link(pair) | licenses | Pass string to API, returned as string from API |
| rights | Descriptive | Optional | String | max 65535 | |
| fundings | Descriptive | Optional | List(pair) | max 255 | Pass array of pair objects (project, funder) to API |
| description | Descriptive | Optional | String | max 65535 | Full-text indexed |
| externalItems | Descriptive | Optional | List(pair) | max 255 | Pass array of pair objects (type, URL) to API |
| rawMetadata | Raw | Optional | String | max 262144 | Larger raw data is discarded |
| rawChecksum | Raw | Automatic | String | max 255 | Hash (MD5) of: rawMetadata |

NOTE: See also the custom queries section and the schema diagram: schema.md.

NOTE: The properties related, selfies, visited are only for internal purposes and hence not listed here.

NOTE: The preloaded set of categories (see categories.csv) is highly opinionated.

Global Metadata

The metadata type has the custom metadata fields:

| Key | Type | Comment |
|---|---|---|
| version | Integer | Internal schema version (to compare against the schemaVersion property) |
| comment | String | Database comment |

Property Metadata

Each schema property has a label; additionally, the descriptive properties have a comment.

| Key | Type | Comment |
|---|---|---|
| label | String | For UI labels |
| comment | String | For UI helper texts |

pair Documents

A helper document type used as link target or list element for the source, creators, identifiers, synonyms, language, subjects, license, fundings, and externalItems properties.

| Property | Type | Constraints |
|---|---|---|
| name | String | min 1, max 255 |
| data | String | max 4095, URL regex |

NOTE: The URL regex is based on stephenhay’s pattern.

Interrelation Edges

| Type | Comment |
|---|---|
| isRelatedTo | Generic catch-all edge type and base type for all other edge types |
| isNewVersionOf | See DataCite |
| isDerivedFrom | See DataCite |
| hasPart | See DataCite |
| isPartOf | See DataCite |
| isDescribedBy | See DataCite |
| commonExpression | See OpenWEMI |
| commonManifestation | See OpenWEMI |

NOTE: The graph is directed, so the edge names have a direction. By default, the edge name refers to the outbound direction.
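
For illustration, these edges can be traversed in a custom SQL query; a minimal sketch assuming ArcadeDB's graph functions out() and expand():

SELECT expand(out('isPartOf')) FROM metadata LIMIT 10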

Edge Metadata

| Key | Type | Comment |
|---|---|---|
| label | String | For UI labels (outbound edge) |
| altlabel | String | For UI labels (incoming edge) |

Ingestable to Native Schema Crosswalk

TODO:

| DatAasee | DataCite | DC | LIDO | MARC | MODS |
|---|---|---|---|---|---|
| name | titles.title | title | descriptiveMetadata.objectIdentificationWrap.titleWrap.titleSet | 245, 130 | titleInfo, part |
| creators | creators.creator | creator | descriptiveMetadata.eventWrap.eventSet | 100, 700 | name, relatedItem |
| publisher | publisher | publisher | descriptiveMetadata.objectIdentificationWrap.repositoryWrap.repositorySet | 260, 264 | originInfo |
| publicationYear | publicationYear | date | descriptiveMetadata.eventWrap.eventSet | 260, 264 | originInfo, part |
| resourceType | resourceType | type | category (TODO) | 007, 337 | genre, typeOfResource |
| identifiers | identifier, alternateIdentifiers.alternateIdentifier | identifier | lidoRecID, objectPublishedID | 001, 003, 020, 024, 856 | identifier, recordInfo |
| synonyms | titles.title | title | descriptiveMetadata.objectIdentificationWrap.titleWrap.titleSet | 210, 222, 240, 242, 243, 246, 247 | titleInfo |
| language | language | language | descriptiveMetadata.objectClassificationWrap.classificationWrap.classification | 008, 041 | language |
| subjects | subjects.subject | subject | descriptiveMetadata.objectRelationWrap.subjectWrap.subjectSet, descriptiveMetadata.objectClassificationWrap.classificationWrap.classification | 655, 689 | subject, classification |
| version | version | | descriptiveMetadata.objectIdentificationWrap.displayStateEditionWrap.displayEdition | 250 | originInfo |
| license | rightsList.rights | | administrativeMetadata.rightsWorkWrap.rightsWorkSet | 506, 540 | accessCondition |
| rights | rightsList.rights | rights | administrativeMetadata.rightsWorkWrap.rightsWorkSet | 506, 540 | accessCondition |
| fundings | fundingReferences.fundingReference | | | | |
| description | descriptions.description | description | descriptiveMetadata.objectIdentificationWrap.objectDescriptionWrap.objectDescriptionSet | 500, 520 | abstract |
| externalItems | relatedIdentifiers.relatedIdentifier | related | descriptiveMetadata.objectRelationWrap.relatedWorksWrap.relatedWorkSet | 856 | relatedItem |
| keywords | subjects.subject | subject | descriptiveMetadata.objectIdentificationWrap.objectDescriptionWrap.objectDescriptionSet | 653 | subject, classification |
| dataLocation | identifier | | | 856 | location |
| dataFormat | formats.format | format | | | |
| sizeBytes | | | | | |
| isRelatedTo | relatedItems.relatedItem, relatedIdentifiers.relatedIdentifier | related | descriptiveMetadata.objectRelationWrap.relatedWorksWrap.relatedWorkSet | 780, 785, 786, 787 | relatedItem |
| isNewVersionOf | relatedItems.relatedItem, relatedIdentifiers.relatedIdentifier | | | | relatedItem |
| isDerivedFrom | relatedItems.relatedItem, relatedIdentifiers.relatedIdentifier | | | | relatedItem |
| isPartOf | relatedItems.relatedItem, relatedIdentifiers.relatedIdentifier | | | 773 | relatedItem |
| hasPart | relatedItems.relatedItem, relatedIdentifiers.relatedIdentifier | | | | relatedItem |
| commonExpression | | | | | relatedItem |
| commonManifestation | identifier, alternateIdentifiers.alternateIdentifier | identifier | lidoRecID, objectPublishedID | 001, 003, 020, 024, 856 | identifier, recordInfo |

Query Languages

| Language | Identifier | Tutorial | Documentation |
|---|---|---|---|
| SQL | sql | here | ArcadeDB SQL |
| Cypher | cypher | here | Neo4J Cypher |
| GraphQL | graphql | here | GraphQL Spec |
| Gremlin | gremlin | here | Tinkerpop Gremlin |
| MQL | mongo | here | Mongo MQL |
| SPARQL | sparql | (WIP) | SPARQL |

4. Tutorials

In this section, lessons for newcomers are given.

Overview:

Getting Started

  1. Set up a compatible compose orchestrator
  2. Download the DatAasee release
     $ wget https://raw.githubusercontent.com/ulbmuenster/dataasee/0.5/compose.yaml
    

    or:

     $ curl -O https://raw.githubusercontent.com/ulbmuenster/dataasee/0.5/compose.yaml
    
  3. Create or mount a folder for backups (in the mount case, assuming your backup volume is mounted under /backup on the host)
     $ mkdir -p backup
    

    or:

     $ ln -s /backup backup
    
  4. Ensure the backup location has the necessary permissions:
     $ chmod o+w backup  # For testing
    

    or:

     $ sudo chown root backup  # For deploying
    
  5. Start the DatAasee service: “␣” denotes a leading space, which causes the shell command to be omitted from the history.
     $ ␣DB_PASS=password1 DL_PASS=password2 docker compose up -d
    

    or:

     $ ␣DB_PASS=password1 DL_PASS=password2 podman compose up -d
    

Now, if started locally, point a browser to http://localhost:8000 to use the web frontend, or send requests to http://localhost:8343/api/v1/ to use the HTTP API directly, for example via wget or curl.
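
As a first check that the service is up, the readiness endpoint can be queried (same command as in the probe how-to):

$ wget -qO- http://localhost:8343/api/v1/ready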

Example Ingest

For demonstration purposes the collection of the “Directory of Open Access Journals” (DOAJ) is ingested. An ingest has four phases: First, the operator collects the necessary information about the metadata source, i.e. URL, protocol, format, and data steward. Second, the ingest is triggered via the HTTP-API. Third, the backend ingests the metadata records from the source into the database. Fourth and lastly, the ingested data is interconnected inside the database.

  1. Check the “Directory of Open Access Journals” (in a browser) for a compatible ingest method:
     https://doaj.org
    

    The oai-pmh protocol is available.

  2. Check the documentation about OAI-PMH for the corresponding endpoint:
     https://doaj.org/docs/oai-pmh/
    

    The OAI-PMH endpoint URL is: https://doaj.org/oai.

  3. Check the OAI-PMH endpoint for available metadata formats (for example, in a browser):
     https://doaj.org/oai?verb=ListMetadataFormats
    

    A compatible metadata format is oai_dc.

  4. Trigger the ingest:
     $ wget -qO- http://localhost:8343/api/v1/ingest --user admin --ask-password --post-data \
       '{"source":"https://doaj.org/oai", "method":"oai-pmh", "format":"oai_dc", "rights":"CC0", "steward":"helpdesk@doaj.org"}'
    

    A status 202 confirms the start of the ingest. Since no steward is listed in the DOAJ documentation, a general contact is set here. Alternatively, the “Ingest” form of the “Admin” page in the web frontend can be used.

  5. DatAasee reports the start of the ingest in the backend logs:
     $ docker logs dataasee-backend-1
    

    with a message akin to: Ingest started from https://doaj.org/oai via oai-pmh as oai_dc..

  6. DatAasee reports completion of the ingest in the backend logs:
     $ docker logs dataasee-backend-1
    

    with a message akin to: Ingest completed from https://doaj.org/oai of 22133 records (of which 0 failed) after 0.1h..

  7. DatAasee starts interconnecting the ingested metadata records:
     $ docker logs dataasee-database-1
    

    with the message: Interconnect Started!.

  8. DatAasee finishes interconnecting the ingested metadata records:
     $ docker logs dataasee-database-1
    

    with the message: Interconnect Completed!.

NOTE: The interconnect is a potentially long-running, asynchronous operation, whose status is only reported in the database logs.

NOTE: Generally, the ingest methods OAI-PMH for suitable sources, S3 for multi-file sources, and GET for single-file sources should be used.

Example Harvest

A typical use-case for DatAasee is to forward all metadata records from a specific source. To demonstrate this, the previous Example Ingest is assumed to have happened.

  1. Check the ingested sources
     $ wget -qO- http://localhost:8343/api/v1/sources
    
  2. Request the first set of metadata records from source https://doaj.org/oai (the source needs to be URL encoded):
     $ wget -qO- http://localhost:8343/api/v1/metadata?source=https%3A%2F%2Fdoaj.org%2Foai
    

    At most 100 records are returned per page. For the first page, the parameter page=0 may also be used.

  3. Request the next set of metadata records via pagination:
     $ wget -qO- 'http://localhost:8343/api/v1/metadata?source=https%3A%2F%2Fdoaj.org%2Foai&page=1'
    

    The last page can contain fewer than 100 records; all preceding pages contain 100 records.

NOTE: With the source filter, full records are returned instead of search results (as when the filter is omitted); see /metadata.

NOTE: The records are returned in no particular order that can be relied upon, so assume no order.
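
To harvest all pages, the pagination can be scripted; a minimal sketch assuming three pages (in practice, loop until an empty page is returned):

$ for p in 0 1 2; do wget -qO- "http://localhost:8343/api/v1/metadata?source=https%3A%2F%2Fdoaj.org%2Foai&page=$p"; done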

Secret Management

Two secrets need to be managed for DatAasee, the database root password and the backend admin password. To protect these secrets on a host running docker(-compose), for example, the following tools can be used:

sops

$ printf "DL_PASS=password1\nDB_PASS=password2" > secrets.env
$ sops encrypt -i secrets.env
$ sops exec-env secrets.env 'docker compose up -d'

NOTE: For testing use gpg --full-generate-key and pass SOPS_PGP_FP.

consul & envconsul

$ consul kv put dataasee/DL_PASS password1
$ consul kv put dataasee/DB_PASS password2
$ envconsul -prefix dataasee docker compose up -d

NOTE: For testing use consul agent -dev.

env-vault

$ EDITOR=nano env-vault create secrets.env
$ env-vault secrets.env docker compose -- up -d

openssl

$  printf "DL_PASS=password1\nDB_PASS=password2" | openssl aes-256-cbc -e -a -salt -pbkdf2 -in - -out secrets.enc
$ (openssl aes-256-cbc -d -a -pbkdf2 -in secrets.enc -out secrets.env; docker compose --env-file .env --env-file secrets.env up -d; rm secrets.env)

Container Engines

DatAasee is deployed via a compose.yaml (see How to deploy), which is compatible with the following container and orchestration tools:

Docker-Compose (Docker)

Installation see: docs.docker.com/compose/install/

$  DB_PASS=password1 DL_PASS=password2 docker compose up -d
$ docker compose ps
$ docker compose down

Docker-Compose (Podman)

Installation see: podman-desktop.io/docs/compose/setting-up-compose

NOTE: See also the podman compose manpage.

NOTE: Alternatively the package podman-docker (on Ubuntu) can be used to emulate docker through podman.

NOTE: The compose implementation podman-compose is not compatible at the moment.

$  DB_PASS=password1 DL_PASS=password2 podman compose up -d
$ podman compose ps
$ podman compose down

Kompose (Minikube)

Installation see: kompose.io/installation/

Rename the compose.yaml to compose.txt and run:

$ kompose -f compose.txt convert
$ minikube start
$ kubectl create secret generic dataasee --from-literal=database=password1 --from-literal=datalake=password2
$ kubectl apply -f .
$ kubectl port-forward service/backend 8343:8343  # now the backend can be accessed via `http://localhost:8343/api/v1`

NOTE: Due to a Kubernetes/Next.js issue this does currently not work for the frontend.

$ minikube stop

Container Probes

The following endpoints are available for monitoring the respective containers; here the compose.yaml host names (service names) are used. Logs are written to the standard output.

Backend

Ready:

http://backend:4195/ready

returns HTTP status 200 if ready, see also Connect ready.

Liveness:

http://backend:4195/ping

returns HTTP status 200 if live, see also Connect ping.

Database

Ready:

http://database:2480/api/v1/ready

returns HTTP status 204 if ready, see also ArcadeDB ready.

Liveness:

http://database:2480/api/v1/exists/metadatalake

returns HTTP status 200 if live.

Frontend

Ready:

http://frontend:8000

returns HTTP status 200 if ready.

Custom Queries

Custom queries are meant for downstream services to customize recurring data access. Overall, the DatAasee database schema is based around the metadata vertex type, which corresponds to a one-big-table (OBT) pattern in relational terms. See the schema reference as well as the schema overview for the data model.

NOTE: All custom query results are limited to 100 items per request; use respective paging.

SQL

DatAasee uses the ArcadeDB SQL dialect (sql). For custom SQL queries, only single, read-only queries are admissible, meaning:

The vertex type (cf. table) holding the metadata records is named metadata.

Examples:

Get the schema:

SELECT FROM schema:types

Get (at most) the first one-hundred metadata record titles:

SELECT name FROM metadata
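
Such a query can also be submitted URL-encoded via the /metadata endpoint, for example:

$ wget -qO- 'http://localhost:8343/api/v1/metadata?language=sql&query=SELECT%20name%20FROM%20metadata'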

Gremlin

DatAasee supports a subset of Gremlin (gremlin).

Examples:

Get (at most) the first one-hundred metadata records:

g.V().hasLabel("metadata")

Cypher

DatAasee supports a subset of OpenCypher (cypher). For custom Cypher queries, only read-queries are admissible, meaning:

Examples:

Get labels:

MATCH (n) RETURN DISTINCT labels(n)

Get (at most) the first one-hundred metadata records:

MATCH (m:metadata) RETURN m

MQL

DatAasee supports a subset of MQL (mongo) as JSON queries.

Examples:

Get (at most) the first one-hundred metadata records:

{ 'collection': 'metadata', 'query': { } }

GraphQL

DatAasee supports a subset of GraphQL (graphql). GraphQL use requires some prior setup:

  1. A corresponding GraphQL type for the native metadata type needs to be defined:
     type metadata { id: ID! }
    
  2. Some GraphQL query needs to be defined, for example named getMetadata:
     type Query { getMetadata(id: ID!): [metadata!]! }
    

Since GraphQL type and query declarations are ephemeral, declarations and query execution should be sent together.

Examples:

Get (at most) the first one-hundred metadata record titles:

type metadata { id: ID! }

type Query { getMetadata(id: ID!): [metadata!]! }

{ getMetadata { name } }

SPARQL

TODO:

Custom Frontend

Remove Prototype Frontend

Remove the YAML object "frontend" in the compose.yaml (all lines below ## Frontend # ...).


5. Appendix

In this section development-related guidelines are gathered.

Overview:

Dependency Docs:

Development Decision Rationales:

Usage

Infrastructure

Data Model

Database

Backend

Frontend

Development Workflows

Development Setup

  1. git clone https://github.com/ulbmuenster/dataasee && cd dataasee (clone repository)
  2. make setup (builds container images locally)
  3. make start (starts development setup)

Release Builds

Compose Setup

Dependency Updates

  1. Dependency listing
  2. Dependency versions
  3. Version verification (Frontend only)

Schema Changes

  1. Schema definition
  2. Schema documentation
  3. Schema implementation

API Changes

  1. API architecture
  2. API documentation
  3. API implementation
  4. API rendering
  5. API testing

Dev Monitoring

Coding Standards

Release Management