Package 'prismaid'

Title: Open Science AI Tools for Systematic, Protocol-Based Literature Reviews
Description: prismAId uses generative AI models to extract data from scientific literature. It offers simple-to-use, efficient, and replicable methods for analyzing literature when conducting systematic reviews.
Authors: Riccardo Boero [aut, cre]
Maintainer: Riccardo Boero <[email protected]>
License: AGPL (>= 3)
Version: 0.6.7
Built: 2025-03-09 06:12:14 UTC
Source: https://github.com/open-and-sustainable/prismaid

Run Review

Description

Runs a review process on the input data. The input must be structured in TOML format and consists of several sections and parameters, as detailed below.

Usage

RunReview(input_string)

Arguments

input_string

A string containing the TOML-formatted input data described in Details.

Details

This function interfaces with a shared library to perform the review process on the input data. The TOML configuration is organized into the sections described below.

[project]

  • name: A string representing the project title. Example: "Use of LLM for systematic review".

  • author: The name of the project author. Example: "John Doe".

  • version: The version number for the project configuration. Example: "1.0".
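For illustration, a [project] block using the example values above might look like this (all values are placeholders to adapt to your project):

  [project]
  name = "Use of LLM for systematic review"
  author = "John Doe"
  version = "1.0"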

[project.configuration]

  • input_directory: The file path to the directory containing manuscripts to be reviewed. Example: "/path/to/txt/files".

  • input_conversion: Specifies manuscript conversion formats:

    • "": Default, non-active conversion.

    • "pdf", "docx", "html", or combinations like "pdf,docx".

  • results_file_name: The path and base name for saving results (file extension will be added automatically). Example: "/path/to/save/results".

  • output_format: The format for output results. Options: "csv" (default) or "json".

  • log_level: Determines logging verbosity:

    • "low": Minimal logging (default).

    • "medium": Logs to standard output.

    • "high": Logs to a file (see user manual for details).

  • duplication: Runs model queries twice for debugging purposes. Options: "yes" or "no" (default).

  • cot_justification: Requests chain-of-thought justification from the model. Options: "yes" or "no" (default).

  • summary: Generates and saves summaries of manuscripts. Options: "yes" or "no" (default).
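A sketch of the [project.configuration] block using the defaults described above (the paths are placeholders):

  [project.configuration]
  input_directory = "/path/to/txt/files"
  input_conversion = ""
  results_file_name = "/path/to/save/results"
  output_format = "csv"
  log_level = "low"
  duplication = "no"
  cot_justification = "no"
  summary = "no"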

[project.zotero]

  • user: Your Zotero user ID, which can be found in your Zotero settings at https://www.zotero.org/settings.

  • api_key: API key for Zotero.

  • group: The name of the collection or group containing the documents to review, with nesting represented as a path, e.g. "parent/collection".
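If documents are retrieved from Zotero, the block might look as follows (the user ID, API key, and collection path are placeholders):

  [project.zotero]
  user = "12345678"
  api_key = "your-zotero-api-key"
  group = "parent/collection"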

[project.llm]

  • Configuration for LLMs, supporting multiple providers for ensemble reviews.

  • Parameters include:

    • provider: The LLM service provider. Options: "OpenAI", "GoogleAI", "Cohere", or "Anthropic".

    • api_key: API key for the provider. If empty, environment variables will be checked.

    • model: Model name. Options vary by provider:

      • OpenAI: "gpt-3.5-turbo", "gpt-4-turbo", "gpt-4o", "gpt-4o-mini", or "" (default).

      • GoogleAI: "gemini-1.5-flash", "gemini-1.5-pro", "gemini-1.0-pro", or "" (default).

      • Cohere: "command-r7b-12-2024", "command-r-plus", "command-r", "command-light", "command", or "" (default).

      • Anthropic: "claude-3-5-sonnet", "claude-3-opus", "claude-3-sonnet", "claude-3-haiku", or "" (default).

    • temperature: Controls model randomness. Range: 0 to 1 (or 0 to 2 for GoogleAI). Lower values reduce randomness.

    • tpm_limit: Tokens per minute limit before delaying prompts. Default: 0 (no delay).

    • rpm_limit: Requests per minute limit before delaying prompts. Default: 0 (no delay).
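For illustration, a minimal single-provider block might look as follows, assuming (by analogy with the numbered [review.#] entries) that each provider is configured in a numbered sub-table such as [project.llm.1]; consult the user manual for the exact header in ensemble reviews. The API key is left empty so that environment variables are used, and all other values are placeholders:

  [project.llm.1]
  provider = "OpenAI"
  api_key = ""
  model = "gpt-4o-mini"
  temperature = 0.2
  tpm_limit = 0
  rpm_limit = 0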

[prompt]

  • Defines the main components of the prompt for reviews.

  • persona: Optional text specifying the model's role. Example: "You are an experienced scientist...".

  • task: Required text framing the task for the model. Example: "Map the concepts discussed in a scientific paper...".

  • expected_result: Required text describing the expected output structure in JSON.

  • definitions: Optional text defining concepts to clarify instructions. Example: "'Interest rate' is defined as...".

  • example: Optional example to illustrate concepts.

  • failsafe: Specifies a fallback response if the concepts cannot be identified. Example: "Respond with an empty '' value if concepts are unclear".
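A sketch of the [prompt] block with shortened example texts; every entry is a placeholder to be adapted to the specific review:

  [prompt]
  persona = "You are an experienced scientist conducting a systematic literature review."
  task = "Map the concepts discussed in the scientific paper provided below."
  expected_result = "Output a JSON object with the keys and possible values defined in the review section."
  definitions = "'Interest rate' is defined as the cost of borrowing money."
  example = ""
  failsafe = "Respond with an empty '' value if the concepts are unclear."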

[review]

  • Defines the keys and possible values in the JSON object for the review.

  • Example entries:

    • [review.1]: key = "interest rate", values = [""]

    • [review.2]: key = "regression models", values = ["yes", "no"]

    • [review.3]: key = "geographical scale", values = ["world", "continent", "river basin"]
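The example entries above correspond to the following TOML:

  [review.1]
  key = "interest rate"
  values = [""]

  [review.2]
  key = "regression models"
  values = ["yes", "no"]

  [review.3]
  key = "geographical scale"
  values = ["world", "continent", "river basin"]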

Value

A string indicating the result of the review process.

Examples

RunReview("example input")