Artificial Intelligence (AI) and Literature Searching
AI tools can assist with a number of stages of the literature search process, such as scoping a topic, finding relevant papers, extracting key findings, and mapping related literature and researchers.
AI tools are powerful, but they have limitations; they are designed to complement human knowledge, not replace it!
Being AI Literate does not mean you need to understand the advanced mechanics of AI. It means that you are actively learning about the technologies involved and that you critically approach any texts you read that concern AI, especially news articles.
The ROBOT test below can be used when reading about and using AI applications to help you consider the legitimacy of the technology.
Reliability, Objective, Bias, Ownership, Type
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
To cite in APA: Hervieux, S., & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test
AI Tools to support Literature Reviews
Elicit
Elicit can be used to scope out articles for a literature review and to help find papers that may not appear in the health databases available through the Health Library. Be aware of the strengths and weaknesses of using Elicit for research before relying on its results.
Perplexity / Perplexity Copilot
Perplexity provides an alternative to traditional search engines, where you can directly ask your questions and receive answers based upon a curated set of sources.
Consensus
Consensus is a search engine that uses AI to read through peer-reviewed research and extract the key findings from each paper. Its source material comes from the Semantic Scholar database, and it uses large language models (LLMs) and vector search to surface the most relevant papers.
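Consensus does not publish the details of its pipeline, but the general idea behind vector search can be sketched in a few lines: papers and the search question are converted into numeric vectors ("embeddings") and ranked by similarity. The sketch below is illustrative only; the embedding model and library are assumptions chosen for the example, not what Consensus actually uses.

```python
# Minimal sketch of vector (embedding) search: rank papers by semantic
# similarity to a question. Illustrative only; the model and library here
# are assumptions, not Consensus's actual implementation.
from sentence_transformers import SentenceTransformer
import numpy as np

papers = [
    "Effect of daily exercise on blood pressure in older adults",
    "Machine learning methods for radiology image classification",
    "Salt intake and hypertension: a systematic review",
]
query = "Does reducing salt lower blood pressure?"

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed example model
paper_vecs = model.encode(papers)                # one vector per paper
query_vec = model.encode([query])[0]             # one vector for the question

# Cosine similarity between the question and each paper
scores = paper_vecs @ query_vec / (
    np.linalg.norm(paper_vecs, axis=1) * np.linalg.norm(query_vec)
)
for score, title in sorted(zip(scores, papers), reverse=True):
    print(f"{score:.2f}  {title}")
```

Papers whose embeddings sit closest to the question's embedding are returned first, which is why these tools can find relevant papers even when the wording of the paper differs from the wording of the search.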
Research Rabbit
ResearchRabbit is a literature mapping tool that uses one or more seed papers to then suggest and visualize relevant literature and researchers.
Connected Papers
Connected Papers builds a visual graph connecting a single seed paper to similar papers in a chosen field. Connected Papers sources its papers from the Semantic Scholar database.
Evaluating AI Outputs
Try evaluating the output of a ChatGPT query: does it pass the CRAAP test (Currency, Relevance, Authority, Accuracy, Purpose)?
How current is the information retrieved from the chat query?
Generative AI tools are trained on data that may not include the latest literature available, so they may not have information about recent events or sources.
Consequences of incorrect usage of AI Tools
Critical analysis of research will ensure that articles such as the example below do not pass through undetected. In this example, an AI tool was used to write the paper, and the authors did not disclose this when the paper was submitted for publication. The journal in question also did not conduct adequate peer review before the article was published, and having the paper withdrawn after publication was an embarrassing lesson for both the authors and the journal editors.
Recently the library has received a number of requests for full-text articles that were found in AI-generated outputs.
These citations were 'hallucinations': they had the characteristics of real articles, such as genuine journal titles, author names, and volume, issue and page numbers, but the articles themselves did not exist. The AI tool had fabricated them from patterns in the data it was trained on.
Always check the outputs and refer to a librarian if you need assistance in determining if an article is real and reliable!
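One practical way to check whether a citation is real is to look it up in an open bibliographic index such as Crossref. The sketch below is a suggestion rather than an official library workflow: it searches the Crossref REST API for a citation string and prints the closest matching records so you can compare titles, journals and DOIs against what the AI tool gave you.

```python
# Sketch: check a suspect citation against the Crossref REST API.
# Illustrative suggestion only; the citation string is a made-up example.
import requests

citation = ("Smith, J. (2021). Example title of a suspect article. "
            "Journal of Important Findings, 12(3), 45-67.")

resp = requests.get(
    "https://api.crossref.org/works",
    params={"query.bibliographic": citation, "rows": 3},
    timeout=30,
)
resp.raise_for_status()

# Print the top matches so they can be compared with the suspect citation
for item in resp.json()["message"]["items"]:
    title = (item.get("title") or ["(no title)"])[0]
    journal = (item.get("container-title") or ["(no journal)"])[0]
    doi = item.get("DOI", "(no DOI)")
    print(f"{title}\n  {journal}\n  https://doi.org/{doi}\n")

# If none of the matches resemble the citation, it may be a hallucination;
# check with a librarian before relying on it.
```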
How relevant is the information you are finding?
Content from ChatGPT (and other generative AI tools) can be generic in nature, and may not be suitable for research or scholarly work.
The generated results also depend on the prompts (instructions) that you enter; getting useful results usually requires an understanding of how the tool works, the subject matter, and the way the tool generates its responses.
Are the responses authoritative?
ChatGPT does not disclose where generated information comes from, so it may not be possible to check whether the information was drawn from sources that are qualified, experienced or authoritative. Where references and citations are provided, these are frequently inaccurate or completely made up.
AI-generated content may also draw on copyrighted material; where tools are trained on content created by people, those people are not credited or acknowledged by the AI tool.
Are the responses accurate?
AI-generated content has been shown to be frequently inaccurate, biased or simply wrong.
Any claims made in AI responses need to be checked for accuracy.
Is the AI tool fit for purpose?
AI tools are only as accurate as the information they are trained on, and the algorithms that create the responses may have inbuilt biases. Additionally, some tools may be influenced by commercial interests.
Disclaimer
The contents of this help guide are intended for NT Health staff for information purposes only. Whilst every effort has been made to ensure that the information is correct at the time of publication, it is the user's sole responsibility to decide on the appropriateness, accuracy, currency, reliability and correctness of the content found.
CITING GENERATIVE AI