How to use chatbots to find information about fruit growing

To find information about an advanced topic via a chatbot, you must use the right source and question format. Three critical steps must be taken.

Most importantly, do not use a version of the chatbot that is known for making up random facts (this is called hallucination). A hallucination happens when the chatbot doesn’t have information about the thing you asked, so it pulls information about a similar word or concept and presents it as if it were about your specific question.

Every instance I have read here of somebody searching for information has been through one of the bots that is only suited for casual chatting, precisely because they lie (ahem… hallucinate) so much.

Additionally, use a version that cites its sources. That way you will know whether the answer your chatbot gave you was about Bing cherries, as you asked, or adapted from a source about cherry tomatoes. The free version will still have the same problem as every other free chatbot, but at least you’ll be able to double-check the sources for yourself.

Finally, learn a bit about prompt engineering. This means telling the chatbot how to answer a question, so that something answerable in a sentence doesn’t end up as two pages of irrelevant information shoehorned into looking related.

A good start is, “Answer with five sentences or less.” You can always ask it for more information, but stopping the endless meandering is the first priority.
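The advice above amounts to a reusable prompt template: put the constraints in front of the question so the bot sees the rules before the topic. A minimal sketch in Python (the function name and the exact wording of the rules are my own invention, not any chatbot's API):

```python
def build_prompt(question, max_sentences=5):
    """Wrap a question with instructions that constrain the answer.

    The rules go before the question so the chatbot reads them
    before it starts meandering.
    """
    rules = (
        f"Answer in {max_sentences} sentences or fewer. "
        "If you do not have information on the exact variety asked about, "
        "say so instead of substituting a similar one. "
        "Cite a source for each factual claim."
    )
    return f"{rules}\n\nQuestion: {question}"

prompt = build_prompt("What USDA zones can a Bing cherry tolerate?")
print(prompt)
```

You paste the resulting text into whatever chatbot you use; the point is that the constraints travel with every question instead of being retyped each time.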

Chatbots are programmed to give long answers even when they have no information about a topic so that they look more authoritative. You don’t want to read an extra two pages of text just because the bot wants to make itself sound important.

By choosing the right chatbot (source) and using prompt engineering (format), you’ll be able to find actual new information without worrying whether it will kill your plants instead of helping them.

6 Likes

For now I will enjoy reading and learning on my own while it is still legal to do so. Future generations will only use AI to get answers to all their questions so that they do not have to be bothered with learning on their own.

I like the process sometimes more than the result.

It’s taken me decades to learn common sense… something that seems to be lacking in most folks and all of AI nowadays.

6 Likes

I appreciate how much time and effort @Brisco put into this post. The explanations really helped me understand some of the criticisms I’ve seen about AI-generated text. Since I have an aversion to the aggregated and filtered data these tools create, I do struggle to locate information that is clean. I have reached the point where I look for little clues in the style of writing that help me determine whether something was actually written by a human. Some of what was posted gives me the correct terminology to use when describing my method to others; some of it just helps me think more critically about how to discern and avoid AI postings.

One technique I have tested, though I cannot claim it has perfect efficacy, is date filtering: restricting my search results to a specific year and earlier. Not knowing enough about the back end of the technology, I am sure it could be subverted by attaching false posting dates, but it has helped me a bit thus far when I was more frustrated than usual.
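The date-filter idea is just a cutoff over timestamped results, with exactly the weakness the poster notes. A small sketch (the result list and function name are invented for illustration, not any search engine's API):

```python
from datetime import date

# Hypothetical search results: (title, posting date) pairs.
results = [
    ("Grafting pears onto quince", date(2019, 4, 12)),
    ("Top 10 fruits, by AI", date(2023, 8, 1)),
    ("Cold-hardy figs in zone 6", date(2021, 11, 3)),
]

def filter_by_year(results, cutoff_year):
    """Keep only results dated in cutoff_year or earlier.

    Relies on the posting date being honest; a falsified
    date slips straight through the filter.
    """
    return [(title, d) for title, d in results if d.year <= cutoff_year]

print(filter_by_year(results, 2021))
```

In practice the same cutoff is what a search engine's date-range option applies for you; the sketch just makes the logic, and its blind spot, explicit.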

2 Likes

I use perplexity.ai now; it includes references showing where it got the data. Sometimes the source is just some random person mouthing off on Reddit, but at least then I know the information is bad, unlike with an AI that gives no references.

After using an AI with references, I’m never going back to one without; otherwise there is too much risk of repeating absolute nonsense (and as we all know, there is a lot of that on the Internet).

7 Likes

I’m reasonably competent at posing questions to perplexity. I have not found it to give valid information beyond superficial responses. Any attempt to get hard-core detailed info has been met with “not found”-type responses. This suggests it has not been adequately “trained”. On a positive note, it does give a reasonable definition of a sesquipedalian. I’ve been told that I have a sesquipedalian idiosyncrasy of using grandiloquent histrionics to discountenance impecunious abecedarians.

1 Like

Indubitably. :wink:

Anyone interested in this topic should watch the video @tbg9b posted. It discusses a major problem that AI may or may not be able to overcome if the goal is to generate more useful facts with each new version. It also shows how biased information can be generated, intentionally or unintentionally.

1 Like

Maybe I (finally) turned into an old man yelling at clouds, but I really, really don’t understand the hype behind all these AIs, and I am a lifelong engineer who spent a portion of my career in the machine learning field. Sure, it’s useful and very powerful — you can now run models that our human brains cannot match intuitively. It is a revolutionary technology. But jfc people, stop treating it like it’s some sentient deus ex machina. It’s just data thrown into a math model. It’s supposed to be a helpful tool, not a replacement for our traditional techniques of making sense of the world around us.

2 Likes

The original post is about how chatbots can be useful, even though, when used incorrectly, they cause more harm than good.

There were only four positive things said about that type of software in the whole thread and all four came from you :grinning:

Haha fair enough, I guess I took too roundabout a way of saying it, but what I am trying to say is: you can’t really use bots for this. Fruit growing is a factual topic from the realm of “hard science,” and if you use chatbots, the amount of fact-checking and noise-filtering necessary almost defeats the purpose. You could dumb it down and just collect the sources and then read them, but then it’s just a Google search with extra steps.

2 Likes

Nah, I got you. I just thought it was funny.

The further you move along the line from common knowledge to expert knowledge, the more hallucinations you get. If you want a quick proof of how damaging they can be, just ask any bot for information on the cold-hardiness of some varieties you’re familiar with.

Imagine losing your orchard 5 years down the line because you blindly trusted a chat bot and planted the wrong trees.

1 Like

Oh as someone who lives in zone 9 it’s very easy to imagine haha! I’ll go start breaking ground for my orange tree orchard here in Seattle.

I also think that many people greatly overrate what these bots are capable of, but I compare them more to a standard Google search, where they can give you a concise answer much more quickly. For example, here is one question I just asked perplexity, and the answer:

====

Please make me a list of russet apples which ripen in September

Here’s a list of russet apples that typically ripen in September:

  1. Ashmead’s Kernel
  2. Golden Russet
  3. Roxbury Russet
  4. American Golden Russet
  5. Egremont Russet (typically ripens late September to early October)
  6. Hudson’s Golden Gem
  7. St. Edmund’s Russet (usually ripens late September)
  8. Zabergau Reinette (often ripens in late September)

====

Not bad, eh? And again I can follow up on the sources to confirm and to get more details. Ask focused questions, look into the references it gives, and the tools will save you a lot of time.

2 Likes

If only the chatbot AI would have read your reviews of Hooples Antique Gold it might have made the list…

I like to read… this forum is a wealth of information if you have a zest to learn.

1 Like

Scott
Did it automatically figure out your location, MD/zone 7, before coming up with that list?

It just glued together words it found on the Internet. It’s worse than stupid, it’s not even thinking, it’s just gluing. But it has a lot of words and is pretty amazing with the glue.

3 Likes

Thanks for the tip on using Perplexity. I just asked it a question about harvesting Drippin Honey pear. Its response was very good. Also, it provided 3 references showing where it got its information. The 3 references were Growingfruit.org!
The fact that it provides references for some of its statements is a big plus for me.

Perhaps these AI’s aren’t quite as infallible as some would lead us to believe.

OpenAI’s Whisper transcription tool has hallucination issues:

Associated Press - OpenAI hallucinations

Hospitals are using this software, now that’s truly scary. :scream:

2 Likes

AIs are inundating important software projects with bogus spam bug reports. Developers are fed up with AI-generated bug reports wasting their valuable resources tracking down garbage security vulnerability reports.

There are two sides to every coin, as they say.