HTTP API Reference

ACPype provides a user-friendly HTTP API that allows users to submit molecules in a variety of formats for processing using the ACPYPE software. Our API enables users to specify a wide range of options for processing, including charge method, net charge, and atom type. After submitting their molecule, users can query the result files generated by ACPYPE in the background using the hash ID provided by the server.

Online interactive Notebook: Open In Colab

  1. 1.A Submitting a ligand using SMILES representation
    1. Endpoint description
    2. cURL Example
    3. Python Example
  2. 1.B Submitting a ligand using files with coordinates
    1. Endpoint description
    2. cURL Example
    3. Python Example
  3. 2. Querying the submission status via Hash ID
    1. Endpoint description
    2. cURL Example
    3. Python Example
  4. 3. Fetching the submission results via Hash ID
    1. Endpoint description
    2. cURL Example
    3. Python Example

Workflow Summary

The API user submits a molecule in SMILES format or as a file with coordinates by making a POST request to the ACPYPE API. The server will create a job and return a hash_id. Once the API user has the hash_id, they can query the status of the job by making a GET request to the ACPYPE API. Eventually, the job will be completed and the API user can fetch the results by making a GET request to the ACPYPE API.
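The three steps above can be sketched as a minimal Python client. The endpoint URLs and JSON fields come from this reference; the function names, polling interval, and timeout are illustrative choices, not part of the API:

```python
import time

import requests

BASE_URL = "https://bio2byte.be/acs/api"


def submit(payload: dict) -> str:
    """POST a submission payload and return the job's hash_id."""
    response = requests.post(BASE_URL, json=payload)
    response.raise_for_status()
    return response.json()["hash_id"]


def wait_until_done(hash_id: str, poll_seconds: int = 30, timeout: int = 1800) -> dict:
    """Poll the queue endpoint until the job finishes (HTTP 200)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        response = requests.get(f"{BASE_URL}/queue/{hash_id}/")
        if response.status_code == 200:
            return response.json()
        response.raise_for_status()  # 4xx/5xx raise; 202 means still queued
        time.sleep(poll_seconds)
    raise TimeoutError(f"Job {hash_id} was still queued after {timeout} seconds")


def fetch_results(hash_id: str) -> list:
    """GET the finished job and return its list of result objects."""
    response = requests.get(f"{BASE_URL}/{hash_id}/")
    response.raise_for_status()
    return response.json()["results"]
```

A typical session is then `hash_id = submit(payload)`, `wait_until_done(hash_id)`, `fetch_results(hash_id)`; each step is documented individually in the sections below.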

1.A Submitting a ligand using SMILES representation

This endpoint allows you to submit a prediction job for your molecule of interest via HTTP POST request with your input data in JSON format.

API Endpoint

POST https://bio2byte.be/acs/api

Request Body Format

{
    "inputFile":     "null",
    "file_name":     "String",
    "token":         "String (10 characters)",
    "charge_method": "String",
    "net_charge":    "String | null (it means auto)",
    "atom_type":     "String",
    "email":         "String | null",
    "smiles":        "String"
}

Request Body Fields

The properties in the request body include:

  • inputFile: Not used for SMILES submissions; must be null.
  • file_name: The title of the prediction job.
  • token: A 10-character string identifier related to the user.
  • charge_method: The charge calculation method ("bcc" or "gas").
  • net_charge: The net charge of the ligand (null or a specific value).
  • atom_type: The atom type parameter ("gaff", "gaff2", or "amber").
  • email: [Optional] The email address to receive notifications from the ACPYPE server.
  • smiles: The ligand to process in SMILES format.
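Because the server only accepts specific values for several of these fields, it can be handy to validate a payload locally before submitting. The checks below restate the constraints listed above; the function itself is only an illustration, not part of the API:

```python
# Allowed values taken from the field descriptions above.
ALLOWED_CHARGE_METHODS = {"bcc", "gas"}
ALLOWED_ATOM_TYPES = {"gaff", "gaff2", "amber"}


def validate_smiles_payload(payload: dict) -> list:
    """Return a list of problems; an empty list means the payload looks valid."""
    problems = []
    if payload.get("inputFile") is not None:
        problems.append("inputFile must be null for SMILES submissions")
    if not payload.get("smiles"):
        problems.append("smiles is required")
    if len(payload.get("token") or "") != 10:
        problems.append("token must be exactly 10 characters")
    if payload.get("charge_method") not in ALLOWED_CHARGE_METHODS:
        problems.append('charge_method must be "bcc" or "gas"')
    if payload.get("atom_type") not in ALLOWED_ATOM_TYPES:
        problems.append('atom_type must be "gaff", "gaff2", or "amber"')
    return problems


payload = {
    "inputFile": None,
    "file_name": "OXYGEN",
    "token": "YOURTOKEN1",
    "charge_method": "bcc",
    "net_charge": None,
    "atom_type": "gaff",
    "email": None,
    "smiles": "O",
}
print(validate_smiles_payload(payload))  # [] means ready to submit
```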

Example:

{
    "inputFile":     null,
    "file_name":     "OXYGEN",
    "token":         "YOURTOKEN1",
    "charge_method": "bcc",
    "net_charge":    null,
    "atom_type":     "gaff",
    "email":         null,
    "smiles":        "O"
}

cURL Example

Using cURL, you can submit a prediction job by making a POST request to the API endpoint URL, passing the request payload as JSON using the -d parameter.

curl --location 'https://bio2byte.be/acs/api' \
--header 'Content-Type: application/json' \
--data '{
    "inputFile":     null,
    "file_name":     "OXYGEN",
    "token":         "YOURTOKEN1",
    "charge_method": "bcc",
    "net_charge":    null,
    "atom_type":     "gaff",
    "email":         null,
    "smiles":        "O"
}'

Response Format

Upon successful submission, the API will respond with a JSON object containing the following properties:

  • Location: The URL where you can check the queue and retrieve the results using the provided <hash_id>
  • hash_id: The unique identifier assigned to the prediction job. This <hash_id> will be used to fetch the results using the GET endpoint.
  • message: A descriptive message indicating that the request is being processed; check the queue for updates on the processing status.

You can use the provided Location and hash_id to check the processing status and retrieve the results using the GET endpoint.

Example:

{
    "Location": "/api/queue/bDp0dfaW9knh7v868T9F/",
    "hash_id": "bDp0dfaW9knh7v868T9F",
    "message": "We are processing your request, check queue to be updated about the processing status"
}

Python example

In this Python example, the requests library is used to make a POST request to the API endpoint URL, passing the request payload as JSON using the json parameter. The response is then processed based on the status code. If the status code is in the 2xx range (indicating a successful request), the JSON response is parsed, and the relevant properties (Location, hash_id, and message) are extracted and printed. Otherwise, an error message is printed along with the status code.

import requests
import json

url = "https://bio2byte.be/acs/api"
print("POST", url)

# Define request payload
payload = {
    "inputFile": None,
    "file_name": "OXYGEN",
    "token": "YOURTOKEN1",
    "charge_method": "bcc",
    "net_charge": None,
    "atom_type": "gaff",
    "email": None,
    "smiles": "O"
}

# Send POST request
response = requests.post(url, json=payload)

# Process the response
if 200 <= response.status_code < 300:
    data = response.json()
    location = data["Location"]
    hash_id = data["hash_id"]
    message = data["message"]
    print("Prediction job submitted successfully!")
    print(f"Location: {location}")
    print(f"hash_id: {hash_id}")
    print(f"Message: {message}")
else:
    print("Failed to submit prediction job. Status Code:", response.status_code)

Output example:

POST https://bio2byte.be/acs/api
Prediction job submitted successfully!
Location: /api/queue/bDp0dfaW9knh7v868T9F/
hash_id: bDp0dfaW9knh7v868T9F
Message: We are processing your request, check queue to be updated about the processing status

1.B Submitting a ligand using files with coordinates

This endpoint allows you to submit a prediction job for your molecule of interest (up to 200 atoms) via HTTP POST request with your input data in JSON format.
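Since this endpoint accepts molecules of up to 200 atoms, a quick local count of ATOM/HETATM records can catch oversized inputs before submission. This heuristic is an illustration, not an official pre-check provided by the API:

```python
# Rough client-side check of the 200-atom limit for a PDB input.
MAX_ATOMS = 200


def count_pdb_atoms(pdb_text: str) -> int:
    """Count ATOM and HETATM records in a PDB-format string."""
    return sum(
        1 for line in pdb_text.splitlines()
        if line.startswith(("ATOM", "HETATM"))
    )


pdb_text = (
    "ATOM      1    O HOH Z   1       1.073   0.058   0.025  1.00  0.00           O\n"
    "ATOM      2    H HOH Z   1       0.794   0.044  -0.903  1.00  0.00           H\n"
    "ATOM      3   H1 HOH Z   1       2.041   0.057  -0.021  1.00  0.00           H\n"
    "END"
)
n_atoms = count_pdb_atoms(pdb_text)
print(n_atoms)  # 3
assert n_atoms <= MAX_ATOMS, "molecule exceeds the 200-atom limit"
```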

API Endpoint

POST https://bio2byte.be/acs/api

Request Body Format

{
    "inputFile":     "String",
    "file_name":     "String",
    "token":         "String (10 characters)",
    "charge_method": "String",
    "net_charge":    "String | null (it means auto)",
    "atom_type":     "String",
    "email":         "String | null",
    "smiles":        "null"
}

Request Body Fields

The properties in the request body include:

  • inputFile: The content of the ligand's input file as a single-line string (newlines escaped as \n).
  • file_name: The title of the prediction job, including the file extension of the input file.
  • token: A 10-character string identifier related to the user.
  • charge_method: The charge calculation method ("bcc" or "gas").
  • net_charge: The net charge of the ligand (null or a specific value).
  • atom_type: The atom type parameter ("gaff", "gaff2", or "amber").
  • email: [Optional] The email address to receive notifications from the ACPYPE server.
  • smiles: Not used for file submissions; must be null.

Example:

{
  "inputFile": "REMARK OXYGEN_FROM_API2_NEW.pdb created by acpype (v: 2022.7.21) on Thu May 11 08:48:53 2023\nATOM      1    O HOH Z   1       1.073   0.058   0.025  1.00  0.00           O\nATOM      2    H HOH Z   1       0.794   0.044  -0.903  1.00  0.00           H\nATOM      3   H1 HOH Z   1       2.041   0.057  -0.021  1.00  0.00           H\nEND",
  "file_name": "OXYGEN_FROM_PDB_API.pdb",
  "token": "YOURTOKEN1",
  "charge_method": "bcc",
  "net_charge": null,
  "atom_type": "gaff",
  "email": null,
  "smiles": null
}

cURL Example

Using cURL, you can submit a prediction job by making a POST request to the API endpoint URL, passing the request payload as JSON using the -d parameter.

curl --location 'https://bio2byte.be/acs/api' \
--header 'Content-Type: application/json' \
--data '{
    "inputFile": "REMARK OXYGEN_FROM_API2_NEW.pdb created by acpype (v: 2022.7.21) on Thu May 11 08:48:53 2023\nATOM      1    O HOH Z   1       1.073   0.058   0.025  1.00  0.00           O\nATOM      2    H HOH Z   1       0.794   0.044  -0.903  1.00  0.00           H\nATOM      3   H1 HOH Z   1       2.041   0.057  -0.021  1.00  0.00           H\nEND",
    "file_name":     "OXYGEN_FROM_PDB_API.pdb",
    "token":         "YOURTOKEN1",
    "charge_method": "bcc",
    "net_charge":    null,
    "atom_type":     "gaff",
    "email":         null,
    "smiles":        null
}'

Response Format

Upon successful submission, the API will respond with a JSON object containing the following properties:

  • Location: The URL where you can check the queue and retrieve the results using the provided <hash_id>
  • hash_id: The unique identifier assigned to the prediction job. This <hash_id> will be used to fetch the results using the GET endpoint.
  • message: A descriptive message indicating that the request is being processed; check the queue for updates on the processing status.

You can use the provided Location and hash_id to check the processing status and retrieve the results using the GET endpoint.

Example:

{
    "Location": "/api/queue/bDp0dfaW9knh7v868T9F/",
    "hash_id": "bDp0dfaW9knh7v868T9F",
    "message": "We are processing your request, check queue to be updated about the processing status"
}

Python example

In this Python example, the requests library is used to make a POST request to the API endpoint URL, passing the request payload as JSON using the json parameter. The response is then processed based on the status code. If the status code is in the 2xx range (indicating a successful request), the JSON response is parsed, and the relevant properties (Location, hash_id, and message) are extracted and printed. Otherwise, an error message is printed along with the status code.

import requests
import json

url = "https://bio2byte.be/acs/api"
print("POST", url)

with open("OXYGEN.pdb", "rb") as input_file_handler:
    file_content = input_file_handler.read().decode("utf-8")

# Define request payload
payload = {
    "inputFile": file_content,
    "file_name": "OXYGEN.pdb",
    "token": "YOURTOKEN1",
    "charge_method": "bcc",
    "net_charge": None,
    "atom_type": "gaff",
    "email": None,
    "smiles": None
}

# Send POST request
response = requests.post(url, json=payload)

# Process the response
if 200 <= response.status_code < 300:
    data = response.json()
    location = data["Location"]
    hash_id = data["hash_id"]
    message = data["message"]
    print("Prediction job submitted successfully!")
    print(f"Location: {location}")
    print(f"hash_id: {hash_id}")
    print(f"Message: {message}")
else:
    print("Failed to submit prediction job. Status Code:", response.status_code)

Output example:

POST https://bio2byte.be/acs/api
Prediction job submitted successfully!
Location: /api/queue/bDp0dfaW9knh7v868T9F/
hash_id: bDp0dfaW9knh7v868T9F
Message: We are processing your request, check queue to be updated about the processing status

2. Querying the submission status via Hash ID

This endpoint allows you to query results by fetching a JSON response via HTTP GET request with a specified hash_id. The JSON response contains various details about the query and the results.

API Endpoint

GET https://bio2byte.be/acs/api/queue/<hash_id>/

The <hash_id> is a unique identifier associated with a specific submission that you want to retrieve results for. You need to know this <hash_id> to make a GET request to retrieve the results.

cURL Example

curl --location 'https://bio2byte.be/acs/api/queue/<hash_id>/'

Response Format

While the job is enqueued or being processed, the API will respond with a 202 status code and a JSON object containing the following properties:

{
    "id": "Integer",
    "request_text": "String",
    "location": "String",
    "status": "Integer",
    "hash_id": "String",
    "public": "Boolean"
}
  • id: An identifier for the entry in the database.
  • request_text: Information related to the current status.
  • location: The URL to query the status of the prediction job.
  • status: The processing status (an HTTP-style status code).
  • hash_id: The unique hash identifier for the prediction job.
  • public: Whether the submission is listed on the public page of the ACPYPE website.

Example:

{
    "id": 10735,
    "request_text": "We are processing your request, there are still 0 processes",
    "location": "not_available",
    "status": 202,
    "hash_id": "bDp0dfaW9knh7v868T9F",
    "public": false
}

When the job is done, the API will respond with a 200 status code and a JSON object containing the following properties:

{
    "id": "Integer",
    "creation_date": "String (ISO 8601 date-time format)",
    "token": "String",
    "hash_id": "String",
    "log": "String",
    "views": "Integer",
    "result_request": "Object",
    "results": "Array of Objects"
}
  • id: An identifier for the entry in the database.
  • creation_date: The date and time when the entry was created.
  • token: A 10-character string identifier associated with the user.
  • hash_id: The unique hash identifier for the prediction job.
  • log: A string containing the log or debug information for the prediction job.
  • views: The number of times the prediction job has been viewed on the website.
  • result_request: An object containing information about the status request for the prediction job.
  • results: A list of objects containing the individual results of the prediction job.

Example:

{
    "id": 9946,
    "creation_date": "2023-05-11T10:46:14.040432Z",
    "token": "RTV5GZHc5M",
    "hash_id": "bDp0dfaW9knh7v868T9F",
    "log": "...",
    "views": 21,
    "result_request": {
        ...
    },
    "results": [
        {}
    ]
}

Python example

By following the provided code, you can retrieve the status and results of a submitted job. This functionality allows you to track the progress of your jobs and access the necessary data for further analysis and interpretation.

# Import the necessary libraries
import json
import requests

hash_id = "YOUR_HASH_ID"
url = f"https://bio2byte.be/acs/api/queue/{hash_id}/"
print("GET", url)

# Make the API request and store the JSON response in a variable
response = requests.get(url)

# Check the response status code
if response.status_code >= 400:
    print("Failed to fetch prediction job. Response status Code:", response.status_code)
elif response.status_code == 202:
    print("Still processing your request. Please try again in a minute. Response status code: ", response.status_code)

    # Extract the JSON response
    json_response = response.json()
    print(json_response)
else:
    print("Response status code:", response.status_code)
    # Extract the JSON response
    json_response = response.json()

    id = json_response['id']
    creation_date = json_response['creation_date']
    token = json_response['token']
    hash_id = json_response['hash_id']
    result_request = json_response['result_request']
    location = result_request['location']
    status = result_request['status']

    print("Response fields:")
    print(f"id: {id}")
    print(f"creation_date: {creation_date}")
    print(f"token: {token}")
    print(f"hash_id: {hash_id}")
    print(f"location: {location}")
    print(f"status: {status}")

    print("Your job files are available!")

Output example:

GET https://bio2byte.be/acs/api/queue/bDp0dfaW9knh7v868T9F/
Still processing your request. Please try again in a minute. Response status code:  202
{'id': 10748, 'request_text': 'We are processing your request, there are still 0 processes', 'location': 'not_available', 'status': 202, 'hash_id': 'bDp0dfaW9knh7v868T9F', 'public': False}

3. Fetching the submission results via Hash ID

This endpoint allows you to query results by fetching a JSON response via HTTP GET request with a specified hash_id. The JSON response contains various details about the query and the results.

API Endpoint

GET https://bio2byte.be/acs/api/<hash_id>/

The <hash_id> is a unique identifier associated with a specific submission that you want to retrieve results for. You need to know this <hash_id> to make a GET request to retrieve the results.

cURL Example

curl --location 'https://bio2byte.be/acs/api/<hash_id>/'

Response Format

{
  "id": "Integer",
  "creation_date": "String (ISO 8601 date-time format)",
  "token": "String",
  "hash_id": "String",
  "log": "String",
  "views": "Integer",
  "result_request": {
    "id": "Integer",
    "request_text": "String",
    "location": "String",
    "status": "Integer",
    "hash_id": "String",
    "public": "Boolean"
  },
  "results": [
    {
      "em_mdp": "String",
      "AC_frcmod": "String",
      "AC_inpcrd": "String",
      "AC_lib": "String",
      "AC_prmtop": "String",
      "mol2": "String",
      "CHARMM_inp": "String",
      "CHARMM_prm": "String",
      "CHARMM_rtf": "String",
      "CNS_inp": "String",
      "CNS_par": "String",
      "CNS_top": "String",
      "GMX_OPLS_itp": "String",
      "GMX_OPLS_top": "String",
      "GMX_gro": "String",
      "GMX_itp": "String",
      "GMX_top": "String",
      "NEW_pdb": "String",
      "md_mdp": "String"
    }
  ]
}

Response Fields

  • id: ID of the submission.
  • creation_date: Date and time the submission was created.
  • token: Token for the submission.
  • hash_id: Unique identifier for the submission.
  • log: Log of the prediction execution.
  • views: Number of times the submission has been viewed.
  • result_request: An object containing the status information for the prediction job.
  • results: A list of objects whose fields hold the content of the files generated by the prediction process:
  • em_mdp: Gromacs file - Molecular dynamics parameters
  • AC_frcmod: Amber file - Parameter modification file
  • AC_inpcrd: Amber file - Coordinate file specification
  • AC_lib: Amber file
  • AC_prmtop: Amber file - Parameter/topology file specification
  • mol2: Tripos Mol2 file - Represents a single or multiple chemical compounds
  • CHARMM_inp: Chemistry at HARvard Macromolecular Mechanics (CHARMM) file - Input file containing CHARMM input commands
  • CHARMM_prm: Chemistry at HARvard Macromolecular Mechanics (CHARMM) file - Parameter file
  • CHARMM_rtf: Chemistry at HARvard Macromolecular Mechanics (CHARMM) file - Topology file
  • CNS_inp: Crystallography & NMR System (CNS) file - CNSsolve task file
  • CNS_par: Crystallography & NMR System (CNS) file - CNSsolve task parameters
  • CNS_top: Crystallography & NMR System (CNS) file - CNSsolve task topology
  • GMX_OPLS_itp: Gromacs file - Force-field file
  • GMX_OPLS_top: Gromacs file - Topology file
  • GMX_gro: Gromacs file - molecular structure in Gromos87 format
  • GMX_itp: Gromacs file - the itp file extension stands for include topology
  • GMX_top: Gromacs file - ASCII file which is read by gmx grompp which processes it and creates a binary topology (tpr file)
  • NEW_pdb: Protein Data Bank (PDB) file - File format describing the three-dimensional structures of molecules
  • md_mdp: Gromacs file - Molecular dynamics parameters
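The Python example below saves every result as a .txt file. If you prefer conventional extensions, the result keys can be mapped to filenames. The rule below is inferred from the key names in this list (the last underscore-separated token looks like the extension) and is only a suggestion, not something the API defines:

```python
def result_filename(key: str, prefix: str = "result_1") -> str:
    """Derive a filename with a conventional extension from a result key.

    E.g. "GMX_gro" -> "result_1_GMX.gro", "mol2" -> "result_1.mol2".
    """
    if "_" not in key:
        # Keys without an underscore (e.g. "mol2") are used as the extension.
        return f"{prefix}.{key}"
    stem, _, ext = key.rpartition("_")
    return f"{prefix}_{stem}.{ext}"


print(result_filename("GMX_gro"))    # result_1_GMX.gro
print(result_filename("AC_frcmod"))  # result_1_AC.frcmod
print(result_filename("mol2"))       # result_1.mol2
```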

Python example

In this example, the requests module is used to make an HTTP GET request to the API endpoint URL, passing in the hash_id as part of the URL. The response is then parsed as JSON using the json() method, and the results list is extracted. A nested loop then iterates over each result and each key-value pair within it; each value is written to its own file using the open() function and the write() method.

# Import the necessary libraries
import json
import requests

hash_id = "YOUR_HASH_ID"

# Make the API request and store the JSON response in a variable
url = f"https://bio2byte.be/acs/api/{hash_id}/"
print("GET", url)

response = requests.get(url)

# Check the response status code
if response.status_code >= 400:
    # Display an error message if the request fails
    print("Failed to fetch prediction job. Status Code:", response.status_code)
elif response.status_code == 202:
    print("Still working on the prediction job, please try again in a minute. Status Code:", response.status_code)
else:
    # Extract the JSON response
    json_response = response.json()

    # Loop through each result and save it to a separate file
    for i, result in enumerate(json_response["results"], start=1):
        for key, value in result.items():

            # Save each result to a file
            print(f"Saving {key} content to result_{i}_{key}.txt")

            with open(f"result_{i}_{key}.txt", "w") as f:
                f.write(value)

    print("Files saved with success")

Output example:

GET https://bio2byte.be/acs/api/bDp0dfaW9knh7v868T9F/
Saving em_mdp content to result_1_em_mdp.txt
Saving AC_frcmod content to result_1_AC_frcmod.txt
Saving AC_inpcrd content to result_1_AC_inpcrd.txt
Saving AC_lib content to result_1_AC_lib.txt
Saving AC_prmtop content to result_1_AC_prmtop.txt
Saving mol2 content to result_1_mol2.txt
Saving CHARMM_inp content to result_1_CHARMM_inp.txt
Saving CHARMM_prm content to result_1_CHARMM_prm.txt
Saving CHARMM_rtf content to result_1_CHARMM_rtf.txt
Saving CNS_inp content to result_1_CNS_inp.txt
Saving CNS_par content to result_1_CNS_par.txt
Saving CNS_top content to result_1_CNS_top.txt
Saving GMX_OPLS_itp content to result_1_GMX_OPLS_itp.txt
Saving GMX_OPLS_top content to result_1_GMX_OPLS_top.txt
Saving GMX_gro content to result_1_GMX_gro.txt
Saving GMX_itp content to result_1_GMX_itp.txt
Saving GMX_top content to result_1_GMX_top.txt
Saving NEW_pdb content to result_1_NEW_pdb.txt
Saving md_mdp content to result_1_md_mdp.txt
Files saved with success