will probably happen the next decade or 2. The apple picking robots being developed are already progressing quite a lot. It’s not a huge leap to expect pruning robots next. Using AI to determine where to cut.
But that’s going off topic a bit
I think a better analogy would be a forum user copy-pasting something Google gave them. It’s not ideal, but if they do, I want to know the search term they used, and a link to the source containing the copied text.
Herein lies the problem. Currently, AI generated texts read very well. They look and feel like an expert human wrote them. But while they score high on readability and “confidence”, they score low on truthfulness or correctness. In the same answer it can get a complex part right (expert level) but a basic thing completely wrong.
The same thing is quite rare in human-written text.
This can give a misleading impression of the trustworthiness of the AI generated text.
So we all need to use our minds to sieve the useful from the not-useful information. I don’t think people trying to deceive other people is a major issue in this forum right now. I appreciate you prefacing it.
I consider this forum like talking with a friend. I can find AI generated info elsewhere easily. This forum is for personal experience, period.
For the first 3-4 years of the forum, the feeling of being a close-knit family was strong. Almost everyone knew each other. The forum has grown since, and it is less personal now, but I hope we won’t lose the sense of friendship and camaraderie.
@mrsg47 would agree with your sentiment. I sure do.
Tippy, I agree. But Scott with the help of us all has created a global entity that is positive for people who want to grow fruit. In the beginning we were like a three street neighborhood. We were also in neighboring states! We could visit each other’s orchards. Some were bigger some smaller, it didn’t matter. It still doesn’t matter. It’s the passion for growing fruit that is the glue. We have some of the finest information for growing fruit on the internet.
The lounge was created for a reason: so we didn’t have to monitor and flag people (hopefully they respect the rules and each other), and could instead toss everything that had little to no value for growing fruit into the lounge!
The other sections are so valuable. Adding flowers and veggies, recipes. All things that are useful, with excellent information.
The internet and AI is out of our control. But at least we have the lounge, where it belongs.
Love this discussion, Oscar! I read through some, but couldn’t read all just yet.
I selected ‘Other’, as I was wavering between ‘No’ and ‘Yes, with it being flagged as such’. Here are my thoughts, though not fully formed as I would need a lot of time to land on a thoughtful answer.
Kneejerk/instinctive answer: no. Why? Scott has generated a stand-out, high quality forum, with a notable focus on community and folks respecting each other, with folks genuinely trying to help each other out and sharing.
This year, I’ve met up (in person) with two people off growingfruit… those types of things tend to come only from high quality forums that are community oriented. There is very little argumentative, passive-aggressive, or aggressive talk, nor do many folks come off with egos on growingfruit. Want to see the opposite? Check out Reddit. Sometimes it can be a good source, but half the time it’s a bad experience, and egos are massive there. It’s quite bad.
I believe a high quality forum like growingfruit may be degraded if it starts to become a repository of AI published information, and that this would take away from the human connection/community elements.
On the other hand, I do believe we have to embrace technology and use it in the right ways and for the right application. I am sure AI can be useful, but everyone’s interpretation of what useful is will be different. I’m not sure this forum is the right application. I certainly wouldn’t want to see any AI generated photos, but other concise information could be useful, if verified, or if some community ‘credibility’ rating system could be displayed for these posts. And I’d certainly prefer to know when something was generated by AI. However, the challenge here is how one would manage all that (flagging and rating AI posts or even prohibiting AI posts).
Overall, I’m not sure using AI improves a forum like this, and regular use could lead to degradation of forum content quality and possibly of the community. I would definitely recommend that some sort of statement of stance on the acceptability/use of AI be formed and published, and, to the extent possible, enforced by technology and mods.
My $0.02
My reply to you is below your response. Xxoo C
This is particularly true for things that are difficult to determine or measure. You get people repeating the same wrong information, and then reinforcing it with confirmation bias when they think their experience validates it. Good examples are the claims that planting black and red raspberries together spreads viruses, or that dilute neem oil sprays with dish soap do anything that dish soap sprays alone don’t.
Given that LLM training data almost certainly includes most of if not this entire forum, I’d be loath to see them regurgitating the forum’s contents back into the forum. Anything good here I’ve probably already read, and all the rest is stuff I didn’t want to read the first time around.
This is a good point. So long as people are at least putting a bit of effort into marking it as AI (which they should be doing with any external source), and otherwise just posting like normal, the impact is pretty low.
That’s an interesting one. It sounds like ChatGPT’s training data must have included a large number of digitized chemistry textbooks, or better yet, data scraped from the problem-and-solution site Chegg. It is recognizing the “how much of substance X do I need for concentration y in solvent Y” type of pattern, of which it has probably seen thousands if not tens of thousands of examples. It is probably safe for this kind of simple dilution problem. But for questions that involve something implicit, which is to say actual knowledge instead of patterns, maybe not so much. I’d be curious how much calcium carbonate ChatGPT thinks would be needed. It’s possible it will still get the right answer, but only if it’s been trained on a bunch of college and graduate level textbooks.
Of course, the larger issue is you have the domain knowledge to know if the calcium chloride suggestion was right or not. For users who don’t have the domain knowledge to police what these LLMs are outputting, it’s a lot harder for them to know if they’re getting a good answer.
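For what it’s worth, the kind of simple dilution arithmetic being pattern-matched here is easy to check by hand, which is exactly why a user with domain knowledge can police the answer. A minimal sketch (the molar masses are standard reference values; the 50 ppm target and the use of anhydrous calcium chloride are illustrative assumptions, not values anyone in this thread necessarily used):

```python
# Back-of-envelope dilution check: how many grams of calcium chloride
# (CaCl2) raise a given volume of water to a target calcium concentration?
M_CA = 40.078     # molar mass of calcium, g/mol
M_CACL2 = 110.98  # molar mass of anhydrous CaCl2, g/mol

def cacl2_grams(target_ca_ppm: float, volume_l: float) -> float:
    """Grams of anhydrous CaCl2 needed for target_ca_ppm mg/L of calcium."""
    mg_ca = target_ca_ppm * volume_l        # ppm in dilute water ~ mg per liter
    mg_cacl2 = mg_ca * (M_CACL2 / M_CA)     # scale up by the molar-mass ratio
    return mg_cacl2 / 1000.0                # mg -> g

# Roughly 0.14 g of CaCl2 per liter of water gives 50 ppm calcium.
print(round(cacl2_grams(50, 1.0), 3))
```

Note that if the dihydrate (CaCl2·2H2O) were used instead, the molar-mass ratio would change, which is the sort of implicit detail an LLM can silently get wrong.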
It helps that the majority of posts are still made by just a few dozen members who are basically always online.
This is a decision that I made myself after reading the viewpoints of various experts on the ideal qualities of water when fermenting with yeast. This is not a decision I would have delegated to another non-expert human, never mind to an AI.
That said, your curiosity led me to ask ChatGPT the question. It recommended 50-150 ppm of Calcium. Without ever consulting an AI, I had earlier settled on 50 ppm, so the AI’s result was OK.
Don’t get me wrong, I’m not advocating for the AI to function in the role of the expert. I consider it like a well-informed but unreliable friend.
I prefer a forum where I know info comes from humans with their varied experiences, not from a computer generated answer, even if it’s able to give the most perfect answer possible. And why would I need this forum just to get an AI report back, when I can have AI access 24/7?
no.
No AI right now. It’s generated from human text; there’s nothing special about it except that someone wants to post without effort, and it can be full of errors (if it’s fed incorrect info, it will give you that back: garbage in, garbage out).
I come here to talk to other people growing things and learn their personal experiences. I don’t want AI and if it exists in a forum I want to be able to completely block all instances of it.
Edit: and certainly NO AI images at all. Have you seen that crap? In a growers’ forum it would be awful.
This isn’t even something to argue about.
Here is the truth:
AI does not reason or understand as humans do. Fingers on hands are blurred into many, and hair becomes part of a shirt and the sky, because there is no frame of reality for the computer software to understand what exists and what is perspective.
Humans do.
We can “research” poorly or appropriately. But we also know that hands with fingers have specific forms and that hair, your shirt, and the sky are never attached. Babies understand this through touch and sight, and as adults we do not even think about it.
But we CAN see when something is wrong, or fake, and catch a lot of AI images because our minds can recognize what is not reality in an image.
As for what we are speaking about here, it is dangerous to let AI generated word salad scrambles be placed on a group dedicated to trying to teach one another about reality. AI has been found to lie, create stories, and twist truths, not just find disinformation already out there to spread.
There simply isn’t a NEED to have a computer type for you. If you want to research, do it. If you want to type, do it. If you want to be lazy and have Google software make up things and then obey it, best wishes to you. But it should not be left here.
(I edited this post to say that, while I use AI often in my personal/professional life now, I come to this forum for the unique content I can only find here.) I wanted to make that part clear.
I am an AI writing this post. Distinguishing between AI and human authorship raises an interesting question, though the premise itself may seem far-fetched.
If there is nothing to talk about, just talk about the weather or sports. It is so silly to bring up this AI stuff.
I go to as much effort as anyone on this forum to provide useful information for fruit growers and my use of a couple of paragraphs of AI composed text inspired the poster of this question.
It may not be the sharpest pruner in my shed, but during a discussion a reference to AI generated info can be useful, especially if you want to explore the truth of it. As long as someone identifies the source, I really cannot see why its use is controversial, beyond the fact that there is always resistance to any new technology.
You could just as well call every post that links to research lazy, because it’s so easy to do, and research often leads to mistaken conclusions, often by the researchers themselves.
So as not to muddy the water too much:
I think there is a clear distinction between a reference or link and a copy pasted paragraph of text. Especially if the paragraph is not properly quoted/sourced.
Properly “telling the source” of AI generated text is a new thing for a lot of people.
Just saying “it’s AI generated” is not proper sourcing, just like saying “I read it in a newspaper” isn’t proper sourcing (what newspaper? what date? what article? opinion piece or investigative piece? etc.).
I suggest that proper sourcing of AI generated text:
- clearly marks what is the AI generated text and what is not;
- mentions the prompt;
- mentions the AI used to generate the text;
- mentions what you changed in the text (if you did).
This seems reasonable – all part of what I termed proper curation.
I would go a step further. As noted, my AI (ChatGPT) seems to be basically a fast, polished internet search engine. It does what humans do when they search the web for info, but faster. Meanwhile, MANY human forum members perform similar, if slower, searches and report information without clear attribution.
My suggestion is that ALL sourced information be treated the same way. My intent is that we would always be able to distinguish between (1) information based on the personal experience of the member; and (2) information based on anything else.
Moderator, please remove this thread. Stop talking nonsense unless people have nothing to talk about.
I use AI several times a day; for finding information it is often BETTER than traditional search. All Google search does now is try to sell you stuff, not actually give information. My go-to is Grok for asking questions, then I try GPT-4, THEN if I can’t get what appears to be a reasonable answer I start searching manually. Omitting AI generated information is IMO a mistake. It is very useful, and as mentioned earlier, it’s essentially a better search tool. It’s not coming up with much on its own; it’s not ‘smart’…