To find information about an advanced topic via a chatbot, you must use the right source and the right question format. Three critical steps must be taken.
Most importantly, do not use a version of the chatbot that is known for making up random facts (this is called hallucination). A hallucination happens when the chatbot doesn’t have information about the thing you want to know, so it pulls information about a similar word or concept and pretends it’s about the specific thing you asked.
Every instance I have read here of somebody searching for information has been through one of the bots that can only be used for chatting, precisely because they lie so much (ahem… hallucinate).
Additionally, use a version that cites its sources. That way you will know whether the answer your chatbot gave you was about Bing cherries, as you asked, or was adapted from a source about cherry tomatoes. The free version will still have the same problems as every other free chatbot, but at least you’ll be able to double-check the sources for yourself.
Finally, learn a bit about prompt engineering: telling the chatbot how to answer a question. That way, something that can be answered in a sentence doesn’t end up as two pages of irrelevant information shoehorned into looking related.
A good start is, “Answer in five sentences or fewer.” You can always ask for more information, but stopping the endless meandering is the first priority.
Chatbots are programmed to give long answers even when they have no information about a topic so that they look more authoritative. You don’t want to read an extra two pages of text just because the bot wants to make itself sound important.
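That length-limiting trick can be sketched as a tiny helper. This is only an illustration: `concise_prompt` is a hypothetical name, and the exact wording of the instruction is up to you.

```python
def concise_prompt(question: str, max_sentences: int = 5) -> str:
    """Prefix a question with a length cap so the bot answers briefly first."""
    return f"Answer in {max_sentences} sentences or fewer. {question}"

# Build a capped prompt for a fruit-growing question.
prompt = concise_prompt("Which russet apples ripen in September?")
print(prompt)
```

You can paste the resulting prompt into whichever chatbot you use and ask follow-ups only when the short answer looks sound.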
By choosing the right chatbot (source) and doing a bit of prompt engineering (format), you’ll be able to find actual new information without worrying whether it will kill your plants instead of helping them.
For now I will enjoy reading and learning on my own while it is still legal to do so. Future generations will use AI to get their answers to all questions so that they do not have to be bothered with learning on their own.
I like the process sometimes more than the result.
It’s taken me decades to learn common sense… something that seems to be lacking in most folks, and in all of AI, nowadays.
I appreciate how much time and effort @Brisco put into this post. The explanations really helped me understand some of the criticisms I’ve seen about AI-generated text. Since I have an aversion to the aggregated and filtered data these tools create, I do struggle to locate information that is clean. I have reached a point where I search for little clues in writing style that help me determine whether a piece was actually written by a human. Some of what was posted gives me the correct terminology for describing my method to others; some of it just helps me think more critically about how I can more easily discern and avoid AI postings.
One technique that I have tested, though I cannot claim it has perfect efficacy, is date filtering: restricting my search results to a specific year and earlier. Not knowing enough about the backend of the technology, I am sure it could be subverted by attaching false posting dates, but it has helped me a bit thus far when I was more frustrated than usual.
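As a sketch of the idea, assuming you already have search results paired with posting dates (many search engines also support a cutoff directly, e.g. Google’s `before:` operator), the filter is just a year comparison. The data and function name here are made up for illustration.

```python
from datetime import date

# Hypothetical search results: (posting date, URL) pairs.
results = [
    (date(2021, 6, 1), "https://example.org/pre-ai-post"),
    (date(2024, 3, 15), "https://example.org/possibly-generated"),
]

def filter_by_year(results, cutoff_year):
    """Keep only results posted in cutoff_year or earlier.

    Posting dates can be faked, so this is a heuristic, not a guarantee.
    """
    return [(d, url) for d, url in results if d.year <= cutoff_year]

older = filter_by_year(results, 2022)
```

Here `older` keeps only the 2021 result, dropping anything posted after the cutoff year.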
I use perplexity.ai now; it includes references showing where it got the data. Sometimes it’s just some random person mouthing off on Reddit, but at least then I know the information is bad, unlike with an AI that doesn’t give references.
After using an AI with references, I’m never going to use one without them; otherwise there is too much risk of repeating absolute nonsense (and as we all know, there is a lot of that on the Internet).
I’m reasonably competent at posing questions to perplexity. I have not found it to give valid information beyond superficial responses; any attempt to get hard-core detailed info has been met with not-found-type responses. This suggests it has not been adequately “trained.” On a positive note, it does give a reasonable definition of a sesquipedalian. I’ve been told that I have a sesquipedalian idiosyncrasy of using grandiloquent histrionics to discountenance impecunious abecedarians.
Anyone interested in this topic should watch the video @tbg9b posted. It discusses a major problem that AI may or may not be able to deal with if the goal is to improve the generation of useful facts with each new version. It also shows how there is potential to generate biased information, intentionally or unintentionally.
Maybe I have (finally) turned into an old man yelling at clouds, but I really, really don’t understand the hype behind all these AIs, and I am a lifelong engineer who spent a portion of my career in the machine-learning field. Sure, it’s useful and very powerful: you can now run models that our human brains cannot handle intuitively. It is a revolutionary technology. But jfc, people, stop treating it like it’s some sentient deus ex machina. It’s just data thrown into a math model. It’s supposed to be a helpful tool, not a replacement for our traditional techniques of making sense of the world around us.
Haha, fair enough. I guess I took too roundabout a way of saying it, but what I am trying to say is: you can’t really use bots for this. Fruit growing is a factual topic from the realm of “hard science,” and if you use chatbots, the amount of fact-checking and noise-filtering necessary almost defeats the purpose. You could dumb it down and just collect the sources and then read them, but then it’s just a Google search with extra steps.
The further you go along the line from common knowledge to expert knowledge, the more hallucinations you get. If you want quick proof of how damaging they can be, just ask any bot for information on the cold-hardiness of some varieties you’re familiar with.
Imagine losing your orchard 5 years down the line because you blindly trusted a chat bot and planted the wrong trees.
I also think that many people greatly overrate what these bots are capable of, but I compare them more to a standard Google search, where they can give you a concise answer much more quickly. For example, here is one question I just asked perplexity, and the answer:
====
Please make me a list of russet apples which ripen in September
Here’s a list of russet apples that typically ripen in September:
Ashmead’s Kernel
Golden Russet
Roxbury Russet
American Golden Russet
Egremont Russet (typically ripens late September to early October)
Hudson’s Golden Gem
St. Edmund’s Russet (usually ripens late September)
Zabergau Reinette (often ripens in late September)
======
Not bad, eh? And again, I can follow up on the sources to confirm and to get more details. Ask focused questions, look into the references it gives, and these tools will save you a lot of time.
It just glued together words it found on the Internet. It’s worse than stupid, it’s not even thinking, it’s just gluing. But it has a lot of words and is pretty amazing with the glue.
Thanks for the tip on using Perplexity. I just asked it a question about harvesting Drippin Honey pears. Its response was very good, and it provided 3 references showing where it got its information. All 3 references were Growingfruit.org!
The fact that it provides references for some of its statements is a big plus for me.
AIs are inundating important software projects with bogus spam bug reports. Developers are fed up with AI-generated bug reports wasting their valuable resources as they track down garbage security-vulnerability reports.