How to use ChatGPT / Bing Chat to help you grow better fruit

I run a dev agency, and we’ve done a lot of work integrating and building apps that use this AI technology for practical applications. Since it’s my job to know the ins and outs of what this tech can and can’t do, I’ve tried to use it for pretty much everything in my life, to the point that ChatGPT and Bing Chat have largely replaced Google for me.

So I figured I’d do some testing on the practical uses of large language models (LLMs) to help you manage your home orchard and grow better fruit. For example, if you want to see how each one does at helping me pick the next pear I should grow, you can check here: 🍐 Language Model Comparison - Google Slides

How to ask questions

To start off with, it’s important to understand how to ask your questions. I’m not going to get into how ChatGPT works, but when writing a question I found it really helpful to pretend you are asking the entire internet and all its content a question, and the answer will be a distillation of the content that was most relevant to the entirety of your question.

That’s why if you ask “What’s the best apple?” you will get generic answers like “Honeycrisp, Fuji, Granny Smith,” similar to what you would find by Googling that question. But if you ask:

What are the best apples to grow in zone 7a, Pennsylvania? I want something that’s disease and pest resistant. Give me options that ripen early, mid and late season.

You get:

Early Season: Pristine, Williams’ Pride
Mid-Season: Liberty, Freedom
Late Season: GoldRush, Enterprise

1) Be specific

Make your question as clear and concise as possible, focusing on one issue or aspect of your home orchard.

2) Provide context

Think about how posting forum questions works: the more detailed your question and the more context you provide, the better the answers will generally be. Now imagine an audience that is very eager to please, will happily read six pages of context, and will answer your question in any format you want.

I would strongly suggest starting off by providing an excessive amount of context and then dialing it back as you get a feel for how it impacts the quality of the answers. Include relevant details such as the location, climate, and soil type of your orchard. This helps the model provide suggestions tailored to your specific conditions.

3) Specify answer type

These models (ChatGPT, Bing Chat, Bard, etc.) lean towards providing a general INTRO -> DETAILS -> SUMMARY response that may not answer the question in the way you wanted. You can always ask follow-up questions to get what you want, but if you are looking for a specific type of response it’s helpful to mention that (there’s also a small code sketch after this list for anyone calling these models through an API). Some examples:

  • “I already know the basics, I want to know the more technical details”
  • “Make sure to describe the flavor in detail.”
  • “Provide the pros and cons for each one”
  • “Write your answer in the form of a table”
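
If you happen to be calling these models from code rather than the chat UI, the same three tips map pretty directly onto an API request. This is just a rough sketch using the openai Python package’s chat-completions interface as it existed at the time of writing; the orchard details, model name, and exact client calls are illustrative, so adjust them for whatever you’re actually using:

import openai

openai.api_key = "YOUR_API_KEY"  # your own key here

# Tip 2: provide the context up front (illustrative orchard details)
context = (
    "I have a home orchard in eastern Pennsylvania, zone 7a, "
    "with well-draining soil and full sun."
)

# Tip 1: be specific. Tip 3: specify the answer type.
question = (
    "What are the best apples to grow here? I want something disease and "
    "pest resistant. Give me early, mid and late season options, and write "
    "your answer in the form of a table with pros and cons for each."
)

response = openai.ChatCompletion.create(
    model="gpt-4",  # or "gpt-3.5-turbo"
    messages=[
        {"role": "system", "content": context},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)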

Putting it all together

So, taking these considerations into account, here are a few examples of well-structured prompts/questions (with another short sketch after them showing how to carry the same context through follow-up questions):

I lose most of my plums to plum curculio every year, what should I spray and when?

I have a home orchard in eastern Pennsylvania zone 7a.

and

What are the considerations for growing columnar apples?

I have a home orchard in eastern Pennsylvania zone 7a, well draining soil and full sun.

You can skip the basics, I want to know the more technical details.

and

I grow the following pear varieties, what should I try next?

Harvest Queen
Bartlett
Harrow Sweet
Seckel

I’m looking for something more unique and modern with high disease and pest resistance.
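
If you find yourself asking a lot of these, you can keep the orchard context in one place and carry it through follow-up questions. Again, just a sketch: the ask() helper is something I made up for illustration, and it uses the same openai chat-completions interface as above.

import openai

openai.api_key = "YOUR_API_KEY"

# Illustrative context, reused for every question
ORCHARD_CONTEXT = (
    "I have a home orchard in eastern Pennsylvania, zone 7a, "
    "with well-draining soil and full sun."
)

# The running conversation; follow-ups automatically keep earlier context
messages = [{"role": "system", "content": ORCHARD_CONTEXT}]

def ask(question, model="gpt-4"):
    """Send a question and keep the conversation so follow-ups have context."""
    messages.append({"role": "user", "content": question})
    reply = openai.ChatCompletion.create(model=model, messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

print(ask(
    "I grow Harvest Queen, Bartlett, Harrow Sweet and Seckel pears. "
    "What should I try next? I'm looking for something more unique and "
    "modern with high disease and pest resistance."
))
# A follow-up in the style of tip 3, building on the previous answer:
print(ask("Provide the pros and cons for each one, in a table."))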

Where to ask questions

There are currently several options, and many more are likely to pop up in the coming months, but here are the ones I am currently using. I would suggest asking the same question in at least two so you get a feel for how they compare.

I created a slideshow that compares the responses of the services below to the same question, which you can view here: Language Model Comparison - Google Slides

Note: Things are developing very quickly in this area, so what’s described below may not be accurate in a few months; for example, ChatGPT will soon have access to plugins that let it search the web and connect to other services.

ChatGPT (aka GPT-3.5)

I believe this is free with some usage limitations and generally works okay if you aren’t asking it anything complicated, because when you do it tends to just make stuff up. I really don’t use this any more as the options below are much better, but it’s worth mentioning because this is what people mean when they say “ChatGPT”. For reference, it scored in the 10th percentile on the bar exam. It’s also trained on information available up to September 2021, so it won’t know about anything that happened after that date.

ChatGPT Plus (aka GPT-4)

This is currently only available with the $20/month ChatGPT Plus subscription but is extremely powerful, with very good reasoning skills, scoring in the 90th percentile on the bar exam. The answers it gives are pretty accurate, even on highly specialized questions. If you know how to ask good questions, it’s almost like having access to an expert in any field. Unfortunately, like GPT-3.5, it has no knowledge of current events for now.

Bing Chat

It’s free if you use the Edge browser. Depending on the conversation style you select, it might be using GPT-3.5 or GPT-4 under the hood. There are two major benefits to Bing Chat: 1) it pulls additional information from web search results to give you more factual and up-to-date information, and 2) you can give it a URL and then ask questions about it, for example to summarize an article, provide a different perspective, expand on a subject, etc. One downside is that it’s geared more towards shorter questions and shorter answers, but you can always ask it for more details.

Bard by Google

It’s fairly new and still in beta but free to use. It doesn’t get as much attention, but I would say it’s pretty good, comparable to ChatGPT + Bing Chat combined. The way it answers questions is a bit “robotic”, but Google did say it’s intentionally using a smaller model for now, so this is likely to improve in the coming months.

Example Prompts

I shared a few examples above, but there are many very interesting things you can do with this AI tech that go far beyond simple question answering. I’ll share these in my next post because this one is already a bit long.


I’ve seen examples of experts in particular fields who ask about things in their area of expertise and the answer is mostly right but wrong in subtle but sometimes important ways. E.g., sometimes it seems to just make up things to fill in the gaps of the answer.

Since this is your expertise, how often does that really occur, or is it more an anecdotal/rare occurrence? For this kind of use, it would be along the lines of naming cultivars that don’t exist, etc. The most on-point example was someone asking for a list of native plant species for a particular type of planting, and two of the species it suggested literally didn’t exist in that genus.

I was thinking about just this.

I actually wonder where the dataset comes from that gives these language models knowledge of specific cultivars. Maybe academic papers and such. Maybe even this forum?

Yeah, that’s a known issue but one that is actively being worked on. With regular ChatGPT I would say it was accurate 80% of the time; with GPT-4 it’s closer to 95% of the time, and it will more often tell you if it doesn’t know the answer. Bing Chat and Bard are somewhere in between.

So I wouldn’t ask it to calculate how much insecticide you should mix with 2 gallons of water, but for general exploration of new concepts and ideas it’s exceptionally good.

Additionally, because Bard and Bing Chat are pulling data from web searches, it’s highly unlikely that they will just make things up. But in my opinion, because they are doing web searches, the advice they give becomes slightly biased towards the results they happen to find.

I wouldn’t put too much weight on the experiences people have with these AI tools unless you know they were using the latest model.


Well, marketers call it AI …


@dimitri_7a thanks for all that useful information.
At this point in time, Google search has one advantage: I can see where the information comes from and decide how much I trust it. With AI I have no idea where it got its information from. The few times I tried it, it gave me mostly correct information but also some erroneous information. It’s like watching news on TV: controversial or complex things are sometimes wrong. I try to go to the source when possible.

It’s incredible what AI is capable of, but I trust heuristics more. The old adage “garbage in, garbage out” is an ever present reality, and as @danzeb said, the opacity of it is somewhat unsettling.

I’ve played with some of the AI image generators, and it’s fascinating because you can get a read on how they work based on the images they generate. I’m a carpenter, and inputting requests for images of framing, for example, leads to some very odd facsimiles.


Some gems for your consideration. They don’t look quite like anything I’d ever build! The faces of the carpenters are telling, too, I think, of what’s going on behind the curtain. It’s really about aggregation and averaging of data points, it seems. A useful thing, perhaps, but somewhat like the proverbial monkey with a typewriter, no?



I played around with inputting requests for images of well-known political figures versus more obscure subject matter. The political figure images were often very convincing. But when I requested images related to the ice age, for example, all the images bore the mark of the input data set being swayed by the popular movie “Ice Age”. The problem is that you’re not quite sure what the data set is, and whether it’s really representational of the item/subject in question. You can probe by changing the way you phrase your request, sure, but that has its limits. And the end user is still using a heuristic to determine the relevancy of the output data. So that in itself sort of proves the point.


Bing Chat provides the sources for where it’s pulling information from. It’s pretty cool because it kind of merges the information from the results of multiple pages. I actually think this detracts from the results a bit, because to me too many of the top search results have become SEO garbage, and it will only get worse as more content starts to be generated by AI.

Haha, that is how they work, and it’s a real surprise to everyone that they work this well.

When it comes to image generation I have some experience there too, having trained my own models to generate photo-realistic images. I think we are very, very close to having image generation that’s indistinguishable from photos. Midjourney has probably the best image generation currently and can handle faces and hands really well. We are entering uncanny valley territory, where you can tell something is wrong but it takes a bit of time to identify what.


That’s a whole other angle I hadn’t considered. There’s going to be a sort of hall-of-mirrors effect, isn’t there? That in itself is quite unsettling, because feedback loops are inherently very unstable, or, put another way, they evolve rapidly toward a predictable level of stability. It’s hard to explain what I’m getting at, but if you’re a musician and you’ve ever toyed with a digital delay pedal with the feedback level turned way up, or, more old-timey, a microphone feeding back, you’ll understand what I mean.

Actually, it gives me a different angle on conceptualizing what “the singularity” is and how it potentially emerges.

Thanks for sharing your perspective, @dimitri_7a. Very fascinating, and also somewhat unnerving in some ways. I don’t mean to sound overly critical; I’m sure this stuff is very useful. It’s a great shortcut. Like any tool, it’s transformative and also has its inherent limits. I know that technology is often feared and denigrated, especially when it’s new, and I don’t mean to do the same. I do think a lot of the discourse around AI ignores those inherent limits, though. As I say, in the end it’s a tool and only as good as the person wielding it.
