Be careful what you read; it could be AI-written and dangerous. AI-authored mushroom foraging guides cause concern among experts | Al Mayadeen English.
Artificial intelligence is worrisome for a lot of reasons… not just hunting mushrooms.
That’s par for the course; you can’t learn to forage from books alone.
People who work on their own cars will understand this one. There is a baseline level of knowledge to anything you do there: you understand how the engine and its components work, where things should be in general, the ins and outs of bolts (whether they are stuck or unconventional), how to jack up and secure the car so you can work under it, and even how to organize your tools and parts so the whole job flows.
Now imagine expecting somebody with no knowledge of cars to learn auto mechanics by handing them the Haynes manual for their car. It is not going to go so well when they snap their first bolt on the engine case and the book’s procedures say nothing about that. To you and me the Haynes manual is great: all the torques, bolt specifications, and tidbits that make life easier. But it assumes you already know how to extract stuck bolts and which tools are best for the task.
Learning to forage is the same way: give me half an hour and I can make you a subject-matter expert on a single mushroom. I can make sure that, if you put in just a quarter ounce of common sense, there will be no way in hell you could confuse it with anything else. You can’t get the same result from any book, because the pictures suck and you don’t know whether you are seeing what the words say you should be seeing.
For wannabe foragers:
- Find somebody to teach you the ropes.
- Pick only the easy ones.
- Learn which ones you should always treat as suspect, because they have lookalikes that will make you regret it.
- Stay away from the deep end of the pool unless you really know how to swim.
Makes me think there will come a time when recorded information is divided into “before AI” and “after AI”; in fact, that’s really what we’re doing right now. The initial period will be full of suspect information; then AI-generated material will gradually improve, and eventually it will be preferred.
Just my two bits worth of mulling.
In the big scheme of things it makes no difference. People were an ignorant, superstitious mess before the Gutenberg press made knowledge more readily accessible, and they are an ignorant, superstitious mess today. If somebody cares to take a wager, I would bet they will be an ignorant, superstitious mess in the future.
Not everybody learns the same way… some listen well in class and ask questions, others get it by studying the book and doing homework. And some only learn by doing, as in on-the-job training.
(And some of the younger set seem to learn nothing if they can’t find it on their iPad!)
The innovation has already occurred; this is not gradual. What is being called generative AI today is a family of models of English syntax that, given a few words, can predict the next word to follow. What they produce is plausibility, not truth.
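To show what “predict the next word” means in practice, here is a deliberately tiny Python sketch. It is only a bigram counter, vastly simpler than a real transformer, and the “training text” in it is made up, but it makes the key point: the model outputs whatever is statistically plausible given what it was trained on, with no notion of whether it is true.

```python
# Toy next-word predictor (a bigram counter). Vastly simpler than a real
# transformer, but it shows the core idea: given a word, emit whatever
# most often followed it in the training text -- plausible, not true.
from collections import Counter, defaultdict

# Hypothetical "training text"; note it contains a falsehood scraped in.
corpus = ("the chanterelle is edible . "
          "the jack o lantern is edible . "   # bad data from somewhere
          "the jack o lantern is toxic .").split()

follows = defaultdict(Counter)          # word -> counts of next words
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most plausible next word."""
    return follows[word].most_common(1)[0][0]

print(predict("is"))  # 'edible' wins 2-to-1, regardless of the truth
```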
As Pontius Pilatus once said, “Yes, you can improve plausibility for sure, but you can’t make it be true.”
Funny, I thought that was Abraham Lincoln. Or was it Yogi Berra?
^more on AI+foraging
^I could read this posting without logging in to Facebook.
^comments from an expert in the foraging field. I am familiar with Dr. Kallas; he used to live in my neighborhood and published my article about thimbleberries back in the late 1990s.
I’m gonna have to disagree somewhat. You can learn good habits from any reliable source, and bad habits from any bad source (or a poor interpretation of a good one). The biggest issue I see isn’t the source of information but the approach. Most people try to approach identifying plants and fungi from the angle of matching a picture or confirming an ID. The proper way is, of course, ruling out what it isn’t (rough sketch below). Learning what the handful of dangerous species are also helps.
If you follow that approach, and don’t eat anything when there’s ANY question, then you can learn foraging (or just about anything) from a book. So long as the book is reputable, of course.
If you don’t follow that approach, you’re gonna have a bad time no matter what your source of info is.
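For what it’s worth, here is the rule-it-out mindset in code form. Every species name and feature below is made up purely for illustration; the only real point is the logic: a single mismatched feature eliminates a candidate, and anything short of exactly one surviving candidate means you don’t eat it.

```python
# Rough sketch of the rule-it-out mindset. Every species name and feature
# here is made up for illustration; do NOT use anything like this to
# identify real mushrooms.
candidates = {
    "species A": {"gills": "true gills", "spore_print": "white"},
    "species B": {"gills": "false ridges", "spore_print": "yellow"},
}

observed = {"gills": "false ridges", "spore_print": "yellow"}

# Elimination: a single mismatched feature rules a candidate out entirely.
remaining = [
    name for name, feats in candidates.items()
    if all(observed.get(trait) == value for trait, value in feats.items())
]

# The rule: anything short of exactly one surviving candidate, or ANY
# question about a feature, means you do not eat it.
if len(remaining) == 1:
    print("Single candidate remains:", remaining[0])
else:
    print("Do not eat: identification inconclusive.")
```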
Some thoughts on AI and truth, from someone who’s a philosophy and theology major, went back to get a quick CS degree a couple of years ago, and has used AI/ML for years in various capacities.
There are many types of AI (artificial intelligence), but much of the current popular interest is in a type of ML (machine learning) known as deep learning; this includes most of the transformer-based models (GPT, DALL-E, etc.) as well as competing techniques. Deep learning simply means that multiple layers of a program take a series of tokens (like words or image pixels) and make “sense” of them, where “sense” means categorizing the tokens: sometimes first by “simple” categories like language or color, then progressively into more abstract categories like meaning or type of animal. There is some debate as to whether this is how a physical brain works, but the results are pretty compelling. I’d be wary of saying an AI model is “just” predicting the next word in a sentence, as that seems to undervalue what’s going on; it is quite possible that is exactly how the human brain works when a person is speaking a sentence aloud or thinking about what to type next.
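To make “multiple layers” concrete, here is a toy Python sketch of the layered idea only. The weights are random and the layer “meanings” are just comment labels; a real deep network learns its weights from data, so treat this as an illustration of the structure, not an implementation.

```python
# Toy sketch of "deep" = several layers, each mapping its input into a
# more abstract representation. Weights are random here; real models
# learn them from data. The layer "meanings" are illustrative labels only.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, out_dim):
    """One layer: a linear map followed by a nonlinearity (ReLU)."""
    w = rng.normal(size=(x.shape[-1], out_dim))
    return np.maximum(0.0, x @ w)

tokens = rng.normal(size=(5, 16))  # 5 tokens, 16 raw features each

h1 = layer(tokens, 32)  # "simple" categories (color, word shape, ...)
h2 = layer(h1, 32)      # intermediate categories (parts, phrases, ...)
h3 = layer(h2, 8)       # abstract categories (meaning, type of animal, ...)

print(h3.shape)  # (5, 8): each token now described by 8 abstract features
```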
And, indeed, that is one of the biggest dangers with AI: not that it isn’t human, but that it’s *too* “human”. It gets facts wrong, and then makes up all sorts of justifications for the falsehood. Just like a human out in the “real” world, an AI model gets its info from all sorts of sources, holds false “beliefs”, and can give different answers depending on the “mood” it’s in (i.e., prior inputs and outputs). I don’t want to say that is a problem, per se, because sometimes that’s exactly the right tool for the job, such as in art. But it’s important to recognize that malleability within the tool, which may not be what some people are expecting.
I’m also beyond skeptical of claims (which I have read elsewhere, from a very vocal minority) that AI models cannot create new things. The idea is that AI just spits out what’s been put in (like training on human artwork), whether in whole or in some kind of mashup. But that idea doesn’t hold up. For example, where are all the images of six-fingered people that DALL-E was trained on? There aren’t many, if any: DALL-E was able to create totally new content (even if a bit off-putting to us) in a way that is indistinguishable from, if not surpassing, how human artists imaginatively create new, never-before-seen artworks.
Onto truth more generally, without getting into more of the technical side (i.e., the two major ways to approach truth: religion (finding big-T Truth) and science (finding falsehood)): there is nothing that inherently makes an AI model false or a human teacher correct. There are plenty of stories, for example, of a local telling someone that a given plant was “wild carrot” when in fact it was none other than poison hemlock (Conium maculatum). Ensuring that a given piece of information is correct (and that you’ve heard it correctly, understood it correctly, and can further transmit it correctly, or at least correctly “enough”, for some definition of “enough”) is a very difficult *series* of problems that entire *fields* of study are dedicated to. But yes, it’s a good idea to do your best to verify the info you come across, whether it be from a human, an AI, or a source you can’t discern (which is likely to be the case going forward). That a source is “published” (on Amazon, on paper, etc.) is not a guarantee of much of anything except that someone paid to have it published.
Agreed, there are plenty of legitimate concerns about AI, particularly around it being a force for various types of bigotry and imperialism. It has all kinds of biases from the training data (and in some cases filters) that make it conform to beliefs (such as utilitarianism and democracy) that are very common among certain Western, White sources, even when those beliefs have led to the marginalization and/or elimination of other beliefs, cultures, and/or peoples (sometimes quite forcefully); that’s probably as specific as I want to get about that on a forum about gardening. It is critical to understand that *no* source (whether an encyclopedia or an AI) is ever “neutral” or truly “objective” (and I do not believe there is such a thing as either philosophical neutrality or objectivity, in the most technical sense).
I think this take about superstition is pretty solid. Technology is largely a multiplier: it can increase the speed and scale of things, but the underlying problems (like disinformation) are usually social problems, not technological ones. And for my hot take of the day: I would believe something written by today’s AI over something on social media without hesitation. Not that AI is error-free by any means, but I think people underestimate just how much social media can multiply the worst in people; as a developer, I have made it a point not to participate in *any* of the “traditional” social media platforms.
Interesting, but lots of peculiar things entertain some people. You have some fruit trees, do you?
When I’ve perused mushroom books and noticed the descriptions “edible,” “inedible,” and “unknown,” I’ve often wondered how many lives have been lost in determining which category a specific variety belongs in. Then I wonder if there are people out there who take the “unknown” classification as a challenge.