Fruit growing knowledge, this forum, and AI

Thank you Scott — I truly treasure this site as one of the last best places on the web for things you (mostly) know were posted by humans with direct (even if fallible) experience, not human-centipeded by LLMs.

I would personally support any norms you set up to limit the quoting or regurgitation of LLM output on the site.

As it is now, I just mark anyone who consistently posts LLM summaries as their primary contribution to this forum as ignored forever. It doesn’t help with the meta-commentary, but it makes the site read cleaner, and it lets me refocus my attention on the experts I want to hear from, like @Richard or @tonyOmahaz5 or @mamuang or @weatherandtrees or @JustPeachy or @Olpea (just to name a few).

7 Likes

@TheDerik, instead of trying to find the video discussion I watched, I asked AI about it. This is complicated and up to an individual’s interpretation. Helpfully, it found the YouTube video and had this to say:

Based on recent 2025-2026 AI industry analysis, your assessment aligns with the sentiment that much of the immediate, incremental improvement in Large Language Models (LLMs) has come from optimizing existing functional blocks rather than fundamentally changing the Transformer architecture. However, this “tweaking” has resulted in significant performance gains that many consider to be breakthroughs, particularly in how models reason and interact with data.

Here is a breakdown of how “tweaked blocks” versus “major techniques” are driving current LLM progress:

  1. Functional Block Tweaks (Incremental, Optimization)

Instead of scrapping the Transformer, researchers are refining its components to make models faster, more efficient, and more capable:

  • Multi-Head Latent Attention: A significant tweak to the attention mechanism used in models like DeepSeek V3/V2, improving performance.
  • Mixture-of-Experts (MoE): Refined routing mechanisms that activate fewer parameters, enabling faster, cheaper, and more efficient inference.
  • Precision/Quantization: Optimization of weights (e.g., to 4-bit or lower) to run large models on smaller, cheaper hardware.
  • Context Extension: Better positional embeddings and attention techniques to manage longer, more complex context windows.
    https://www.youtube.com/watch?v=EV7WhVT270Q
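To make the quantization bullet above a bit more concrete, here is a toy pure-Python sketch of symmetric 4-bit quantization. The function names and the five example weights are made up for illustration; real models use more elaborate schemes (group-wise scales, asymmetric ranges, etc.), so treat this only as the basic idea of trading precision for memory:

```python
def quantize_4bit(weights):
    """Map floats to signed 4-bit integers (-8..7) plus one shared scale factor."""
    scale = max(abs(w) for w in weights) / 7  # 7 = largest positive 4-bit value
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the 4-bit codes."""
    return [v * scale for v in q]

weights = [0.12, -0.07, 0.35, -0.4, 0.01]
q, scale = quantize_4bit(weights)
approx = dequantize(q, scale)
# Each reconstructed weight is within half a quantization step of the original,
# but the storage per weight drops from 32 bits to 4 (plus the shared scale).
```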

The second source AI showed was from Reddit. I didn’t read it and don’t know its validity.
https://www.reddit.com/r/LocalLLaMA/comments/1ml77rq/the_llm_world_is_an_illusion_of_progress/#:~:text=Comparing%20LLMs%20is%20a%20total%20illusion%20at,a%20very%20tricky%20task..%20if%20not%20impossible.

1 Like

I don’t think anyone suggested changing the way this current site works at all; it would remain the same, but there would be a tool allowing easier searching and the ability to compile similar info from often hard-to-find original threads into a single source. External LLMs are referencing this site, but they likely just grab a tidbit here and there and don’t take into account its accuracy or the credibility of the poster at all.

What you talking about Willis?


Literally the first line of the OP. Sounds like fruititutifruit bot to me….

I still say AI diarrhea on top of AI diarrhea is still chit.

1 Like

My interpretation of what was suggested would be to use AI to organize, rank, and conglomerate information into one place or tool, not create new information. Also, I don’t think the suggestion was to change anything about the current site, just to add some functionality for those wanting to utilize it… am I wrong @bigiggye?

I see how it could be interpreted that way; you have a point.

1 Like

Yes, to reiterate, I am not, nor is anyone else, suggesting we add bots that dilute the threads with AI slop. I am suggesting we use AI to process the vast trove of user experience and make it easy to display and query. This would be in parallel to the forum, not a replacement.

Imagine searching for a specific variety and seeing a map of every user who mentions this variety or has it on their profile. You could find someone in your neck of the woods to turn to with questions, someone you’d otherwise never find.

Now cross-reference this with third-party soil and weather data and imagine the patterns it uncovers. There is more data here than any horticultural research team can produce in a lifetime, but it’s not being used to its true potential. AI can turn anecdotal evidence into statistically meaningful information.
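The "map of every user who mentions a variety" idea could start as a simple aggregation. Here is a hypothetical sketch; the usernames, zones, and post text are all invented, and a real version would pull this from a forum export rather than a hard-coded list:

```python
from collections import defaultdict

# Invented sample data standing in for real forum posts.
posts = [
    {"user": "alice", "zone": "6b", "text": "My Flavor Grenade pluot set fruit this year."},
    {"user": "bob",   "zone": "5a", "text": "Honeycrisp did fine after -20F."},
    {"user": "carol", "zone": "6b", "text": "Flavor Grenade is my favorite pluot."},
]

def users_by_variety(posts, variety):
    """Group users by growing zone when their posts mention a variety."""
    hits = defaultdict(list)
    for p in posts:
        if variety.lower() in p["text"].lower() and p["user"] not in hits[p["zone"]]:
            hits[p["zone"]].append(p["user"])
    return dict(hits)

print(users_by_variety(posts, "flavor grenade"))
# → {'6b': ['alice', 'carol']}
```

From there, pinning each zone or user location on a map is just a presentation layer on top of the same grouping.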

I agree that this does not need to be hosted locally. But we are sitting on a gold mine of good data and it would be a shame if we don’t take advantage of AI to connect all the dots and find patterns that we mere mortals miss.

1 Like

You could literally do this right now if you wanted because that gold mine has already been ingested by the major LLM platforms.

1 Like

@danzeb
The primary limitation of LLMs and current AI methods in general is that they cannot employ temporal reasoning, because the core computational engine is statistical regression. Ironically, this is also the cause of the enormous power requirements of an AI data center.

I think you just need practice writing queries. For example:

list all users on site GrowingFruit.org who grow Flavor Grenade Pluot

However, be warned that regression inherently contains approximations. When I made the above request to Google, it erroneously stated that I have multi-grafted trees.

Not really. The data are separate from the LLM. It appears that way when executed from the search bar of a web browser (e.g. Chrome, Edge) because the LLM queries the web browser’s indexed catalog of the internet. You can, however, obtain an LLM independent of a web catalog and then point it at your own database of information (e.g. data from horticultural trials of one or more cultivars).
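The "point it at your own database" step usually means retrieving the records relevant to a question and handing only those to the model as context. A bare-bones sketch, with invented trial records and naive word-overlap scoring (a real system would use embeddings):

```python
# Invented stand-ins for records from your own horticultural trial database.
records = [
    "Flavor Grenade pluot: cropped well in zone 7a trial, low rot.",
    "Honeycrisp apple: biennial bearing in zone 4 trial.",
    "Flavor King pluot: shy bearer until year 5 in zone 7 trial.",
]

def retrieve(question, records, k=2):
    """Rank records by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(records, key=lambda r: -len(q_words & set(r.lower().split())))
    return scored[:k]

context = retrieve("how did the Flavor Grenade pluot trial go?", records)
prompt = "Answer from this context only:\n" + "\n".join(context)
# `prompt` would then go to whichever LLM you run, so your own data,
# not the web catalog, is its source.
```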

2 Likes

Yes, I use AI all the time now. It’s gonna come to this site one way or the other; the genie is not going back in the bottle.

Guidelines and some rules would be a really good start. Just ignoring AI and thinking things can go along the same way they did before AI is not realistic, in my point of view. That’s my 2 cents.

2 Likes

AI gotta advertise.

… but the greater risk is that consumers substitute AI chat for keyword search, reducing the revenue (1) to search engines, (2) to advertisers, and (3) to content creators. AI advertising heralds the death of keyword search, but, OTOH, it heralds the death of click-based Web advertising generally, so it’s not a completely bad thing.

I will reiterate my previous question here:

Who do you consider will be paying for this AI (slop bot) integration?

These services are not free. And you would want the best (i.e. more expensive) models.

Clearly, you already have access to LLMs. I suggest prompting the model of your choice to specifically ask how YOU can use its capabilities to a) scrape this site, b) format the data in an efficient way for future LLM parsing, and c) provide the services you desire it to have. You don’t even need to know how to code to do this, as the model will do the heavy lifting.

Additional pro tip: make the code open source and put it on github so anyone can use it. By doing so you will have essentially created the bot you and others may desire.

This all works now, at this moment, for you (or anyone else who wants it) without raising any of the multitude of questions associated with integrating ‘AI’ into this forum.
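For step (b), one common way to format scraped data for later LLM parsing is JSON Lines: one self-contained JSON record per line, which most LLM tooling ingests easily. A sketch, where the post dicts are placeholders you would fill from the actual scrape:

```python
import json

# Placeholder posts; a real run would populate these from scraped forum data.
posts = [
    {"topic": "Persimmons in zone 4", "user": "don_k", "body": "Kasandra survived -32F here."},
    {"topic": "Pluot thread", "user": "grower7a", "body": "Flavor Grenade cracked in the rain."},
]

# Write one JSON record per line (the JSON Lines / .jsonl convention).
with open("forum_posts.jsonl", "w", encoding="utf-8") as f:
    for post in posts:
        f.write(json.dumps(post) + "\n")

# Each line can now be parsed independently, which makes chunked
# processing by an LLM (or anything else) straightforward.
with open("forum_posts.jsonl", encoding="utf-8") as f:
    lines = f.readlines()
```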

1 Like

I asked ChatGPT this question:

“What do people on growingfruit.org say about growing persimmons in zone 4?”

ChatGPT just quoted me a lot in the response.

There’s a reason I keep re-editing some of my old posts dozens of times… the ones that read like reference material are likely to be found by searches.

2 Likes

Like many people I prefer to read human responses on the forum vs. A.I. If I want something fast, I just use A.I. myself.

Part of the problem I’ve noticed with many forums is that newer inexperienced, though enthusiastic, members are eager to enter the conversation, but really don’t have enough background to answer questions accurately (even though they try). So you have to read quite a lot of the forum to figure out the people who do have the experience you are trying to mine.

I don’t know what the answer is to that. It’s really frustrating if you don’t know something about a topic (let’s say something like refinishing a guitar) and want a quick answer. If you ask a forum, you might or might not get good information, because you don’t know if you’re getting info from someone who refinished his kid’s $50 guitar or from someone who has refinished a dozen antique Martin guitars. You have to kind of hang around the forum for a while to figure it out.

I do use A.I. As @CRhode points out, I use it mostly as a search engine. To me, it’s just a faster Google search.

I still get frustrated at the info sometimes, but I’m not using a subscription service (which I understand is quite a bit better). I suspect eventually A.I. will drive you to products for ad revenue, or be subscription based.

Google used to be a decent search engine, but now they just drive you to the “sponsored products” which don’t necessarily match your search criteria. Even the “non-sponsored” products are really sponsored for the first few pages, I think. I suspect the same thing will eventually happen with A.I.

Earlier this morning I had a question for which A.I. gave me a weird answer. My question was fairly specific. In background, I bought a Jacobs 20n drill chuck which handles drills with up to a 1” shank. I needed a chuck key for it (the one I bought didn’t come with a key) so I thought I’d snag one on Ebay. It takes a key with a 7/16” diameter pilot stub (Jacobs K5 key).

However, I thought the Jacobs K5 keys were sort of pricey on Ebay. I suspected an old Supreme number 26 key would fit it, which are cheaper on Ebay, but wasn’t sure, so I asked Grok.

Grok told me the keys were interchangeable, but when I asked for further dimensions, Grok said the pilot diameter of the Supreme key was .422” and the pilot diameter of the K5 was .440”, a difference of .018”, which is a lot if you want a snug fit. Moreover, Grok told me the K5 key’s .440” diameter was different from 7/16” to allow for some slop, because you don’t want an interference fit between the key and the hole. Keep in mind 7/16” = .4375”, which is smaller than the .440” reported key size. I’m not sure how it missed that simple logic.

I questioned Grok harder about the discrepancies and it owned up to picking the numbers from single sources. In the end, it did express (with a high degree of confidence) the keys are interchangeable. So I suppose I got my answer.

I wish I had saved the “conversation” but I deleted it. Probably a good thing, because had I posted the conversation here, it would be even longer than this post. :joy:

2 Likes

Here’s another problem on forums. It happens on other forums too:

The AI when quoting me is subject to the same “bandwagon” effect that occurs on all forums.

Multiple variants of “my two cents worth” have appeared in this forum topic, and AI results for the phrase have not (yet) kept up with the times.

The US cent is no longer in production. Rounding to the nearest nickel is now done for cash transactions.

But 2 cents worth rounded to the nearest nickel (zero cents) would make that opinion worthless, so it should be rounded up to a nickel: “my nickel’s worth”.

In some regions “my ten cents worth” is already in use, that phrase covers the current situation nicely, plus you would get a nickel back in change. So if you had a nickel for every time you expressed an opinion, you could eventually become rich.

If continuing to use a two cents phrase, the “.02 cents worth” variant is more accurately “.02 dollars worth”.

1 Like

There’s charm in a forum of humans talking to each other, however imperfect. I scroll straight through all generated responses. That’s all I can really say without being perceived as unkind. I love reading through even the slog of off-topic chatter. I’m here for the mix of anec-data, expertise, and community. I’m not here for efficiency. I unironically realize this isn’t the OP, but it seems like a good place to voice this opinion…

10 Likes

It is typical for culture to evolve beyond the origins of phrases. Yet, many phrases persist long after most have forgotten what they even originated from, so the “two cents” phrase is in no danger of extinction.

1 Like

I think that’s overly kind. My outlook is that if they can’t be bothered to write their own post, why should I be bothered to read it or respond.

5 Likes

It always takes me a long while to dredge up recollections of how the Internet used to be. One recollection is of DMOZ, which implemented a curated ontology of links. It is represented at the current day by its successor project:

DMOZ allowed content creators to propose their own links for inclusion at specific leaf nodes in the ontology graph. Curlie does the same. These links are then manually reviewed before being incorporated. One supposes there to be some sort of periodic review, too.

In the absence of keyword search, we would of necessity fall back on these kinds of ontologies.

https://curlie.org/