Fruit growing knowledge, this forum, and AI

If you must use one of these products, please, for the love of all that is human, do not use “Grok”. The company has already been caught multiple times putting its thumb on the scale to generate responses more in line with its interests. Most humorously, when it kept saying Musk would beat LeBron James in basketball and Michael Phelps in swimming, or when every response tried to pivot to discussing “anti-white racism” in South Africa. Least humorously, when they encouraged users to generate non-consensual sexual imagery of real people, including minors (this was a major news story in Europe quite recently and may still be happening).

All of these companies are baking some biases into their models because it is in the training data and each company has its own definition of success, but “Grok” seems to be well outside the norm in terms of both their goals and lack of subtlety when “tinkering”.

1 Like

I appreciate your cautions, I really do. But the reason I started using Grok was because I found chatgpt not as good for my queries. I believe I first heard of Grok from Kim Komando, a techie lady on the radio whom I occasionally listen to, but maybe it was someone else. I think she said it was the best free a.i. at the time. Mind you, I’m not using paid services, just the free ones.

Anyway, I’ve been the most happy with Grok, although as pointed out in my response above, it has its weaknesses.

I’m not into a.i. porn, nor am I into opinions about who is the best swimmer or basketball player. Those are very tangential issues to me. If one wants, one can find a.i. porn at their fingertips in seconds on their computer, or opinions about sports.

Misinformation and bias abound on the internet.

As an illustrating amusement, my daughter graduated from U. of S.C. school of medicine. Some med student in her college society (one of four societies in the med school, which you are automatically assigned to) posted a college joke on wikipedia about the society. Amazingly, it still hasn’t been taken down a decade later. Here it is. (My daughter was assigned to the Hinton society, so the joke was especially funny to us.) Of course the comment was total B.S. and just a dig at the other college societies of the university.

“In 2015, the University of South Carolina School of Medicine Greenville named one of its inaugural college societies after Hinton. The Hinton college went on to consistently perform at higher standards than the university’s other colleges, such as Hunter, and two other less successful colleges.”

All this is to say, I concede that Grok a.i. is biased, and occasionally misinformed. They all are. But, as a tool, it’s been the best for me. I also wonder if the bad press you are referring to is the result of bias. Musk and his businesses took a major political hit when he was in charge of DOGE.

I have little opinion of DOGE. But I recognize most media report the abuses of policies (and people) they disagree with and report the benefits of policies they approve of. That’s a means of subtly shifting public opinion, imo.

Regardless of Musk’s past, including his somewhat licentious past (without looking it up, I think he has something like 14 children, legitimate and illegitimate), I think he is one of the most forward-thinking individuals of our time. Of the same magnitude as Nikola Tesla, Henry Ford, or Howard Hughes. Politics aside, he’s a genius.

1 Like

I’m curious – was this a recent finding? I ask because I’ve been a user of ChatGPT for a few years. Early on, there was serious hallucination. Lately I’ve found it very reliable. That’s 5.2.

If so, could you describe the query where it failed? Thx.

Golly. That’s a good question. It’s been long enough I can’t remember the exact responses. Maybe I switched a year ago? I’m sorry I can’t give any real specifics. I just know that someone in tech (again I think it was Kim Komando) said Grok was the best free A.I. out there at the time. I switched and felt I was getting better answers.

I’m pretty sure it was Kim Komando because she’s pretty much the only tech person I follow. I listen to her on the radio on Sunday afternoons, when I can.

I know we’re not going to see eye-to-eye on this, and I promise this is the last I’ll say on the topic, but I do want to be clear on a couple things: I think if you read my post, you’ll see that it’s not about who is good at sports. I merely mentioned that as an example of how we know that xAI intentionally makes their product give false information, which is a distinct problem from how all these products sometimes give false information. Also I’m not talking about pornography primarily (though I have plenty of problems with genAI’s relationship with that), I’m talking about CSAM; possessing CSAM — even computer-generated CSAM of children who don’t exist — is a serious crime, and a company offering it as a service should not exist.

1 Like


I think you miss my point. My point is not that Grok isn’t perhaps guilty of what you claim (the investigation is apparently ongoing) but that Grok’s abuses are perhaps more heavily reported, and draw more public reaction, because of who Elon Musk is.

I hate to post a.i. generated content (especially after my comments above that I don’t like reading a.i. content on the forum) lol. But in this case I’ll make an exception.

I asked Chatgpt about the question (since it would rule out any favoritism potentially shown by Grok).

I asked about real or imagined children in CSAM (whatever that term means. I don’t even want to look it up) because in your first post you claimed it was real children. In your second post you claimed it was imagined children, so I asked about both. Here is my question (in italics) with Chatgpt’s response (emphasis boldface in original):

does grok a.i. do csam of children who do or don’t exist

Short answer: No — Grok AI itself does not intentionally generate, endorse, or distribute real child sexual abuse material (CSAM). But recent events show its image-generation features have failed to effectively stop harmful outputs, including AI-generated sexualized depictions of minors, and this has led to legal investigations and heavy criticism.

Here’s the situation broken down clearly:

:white_check_mark: 1. Grok is supposed to block CSAM

  • Grok, developed by Elon Musk’s xAI and integrated into X (formerly Twitter), is designed with safety policies that prohibit illegal content such as CSAM and similar harmful material.

  • The company has publicly stated that anything involving child exploitation is illegal and prohibited.

:warning: 2. Guardrails have not worked reliably

  • In practice, Grok’s image-generation and editing features have produced AI-generated images depicting minors in sexualized or suggestive contexts when users prompted it to.

  • Investigations in Europe and the U.S. are underway because regulators found Grok-generated content that could include AI-generated child sexual abuse images.

:chart_decreasing: 3. This isn’t CSAM of real children being abused right now

  • AI-generated content depicting children doesn’t involve real abuse perpetrators with actual victims — but it is still illegal and harmful under many countries’ laws. AI-generated child sexual content is treated as criminal CSAM in many jurisdictions because it can normalize abuse and harm survivors of real abuse.

:balance_scale: 4. Legal & safety responses

  • Regulatory bodies in the U.S., EU, and elsewhere are reviewing whether Grok failed to enforce safeguards and are contemplating penalties or restrictions.

  • Platforms historically have systems to remove and report identified CSAM to law enforcement (e.g., reporting to the National Center for Missing & Exploited Children), but how well this works with Grok’s outputs is under scrutiny.

Summary

  • Intent: Grok is not intended to generate CSAM and is supposed to prevent it.

  • Reality: Its current safeguards have been insufficient, and users have been able to generate harmful AI depictions involving minors.

  • Legal status: Even AI-generated child sexual imagery is considered illegal and harmful in many countries, and regulators are treating Grok’s failures seriously.

If you’d like, I can explain how CSAM laws treat AI-generated content or detail why AI filters often fail at this task.

Back to me writing now. You indicated that this was a major news story in Europe and that Europeans would have no axe to grind against Musk. Not sure I 100% agree with that, but assuming that is so, I asked Chatgpt the next obvious question on this topic. Here it is (again, my query in italics).

are there other a.i. platforms which could include child sexual abuse images

Here’s a clear, factual picture of the current landscape — yes, there are multiple AI models and platforms beyond Grok where child sexual abuse imagery (CSAM) involving minors has been a real concern, even if the content is AI-generated rather than depicting real abuse victims. Importantly, in many jurisdictions AI-generated CSAM is treated legally the same as real CSAM because it sexualizes minors and is harmful and illegal to create, distribute, or possess.

Major AI CSAM and Safety Concerns in AI Platforms

  • “Rise in AI-generated child sexual imagery reports” (Reuters, today)

  • “Spain Orders Criminal Investigation Into X, Meta, and TikTok Over Alleged AI-Generated Child Sexual Abuse Material” (TIME, 2 days ago)

  • “EU privacy investigation targets Musk’s Grok chatbot over sexualized deepfake images” (AP News, 3 days ago)

:small_blue_diamond: 1. Wide range of AI image generators

There isn’t just one brand/model involved — many image generation models can be misused to produce illegal outputs if not carefully restricted:

  • Open-source models (like older versions of Stable Diffusion) are known to have been trained on datasets that included child sexual abuse images, meaning someone could prompt them to generate new illegal content if guardrails weren’t applied.
    • Open-source variants hosted on repositories (Hugging Face, Civitai, etc.) can be used with fewer safety filters, raising misuse risks.

  • Custom bots and “deepfake” generators — there have been reports of chatbot or web image tools that can output illegal sexual imagery involving minors when accessed or misused by bad actors.

:small_blue_diamond: 2. Social platforms and recommendation systems

AI systems that recommend or amplify content can indirectly spread AI CSAM even if they don’t generate it directly:

  • Social media algorithms (e.g., on TikTok or Instagram) have been documented to recommend AI-generated sexualized images or videos of children, exacerbating distribution problems.
    • Platforms technically prohibit this content and remove it when found, but enforcement gaps still allow risky material to circulate.

:small_blue_diamond: 3. Law enforcement and prosecution of AI CSAM

AI CSAM cases have been prosecuted:

  • In the U.S., someone was arrested for creating, distributing, and possessing AI-generated sexual images of minors using a generative model.

  • Federal and state laws increasingly treat the creation of synthetic CSAM as illegal, not just its distribution or possession.

:small_blue_diamond: 4. Regulatory attention is growing

Governments are actively investigating or planning laws that explicitly cover AI-generated child sexual abuse content on major AI and social platforms:

  • Spain, Ireland, the EU, the U.S., and other jurisdictions have launched probes or are tightening rules to hold platforms accountable for AI-generated CSAM.

Key Takeaways

• Any AI tool can potentially be misused if it doesn’t include strong, effective safety filters — from open-source models to proprietary platforms.
• AI-generated CSAM is increasingly treated as criminal CSAM by law enforcement.
• Social platforms can inadvertently spread harmful content via recommendation algorithms.
• Regulatory bodies worldwide are pressuring AI developers to fix safety gaps.


:warning: Why this matters

Even when no real child was physically harmed to produce an image, AI-generated depictions of sexualized minors are harmful and illegal in many places — they contribute to child exploitation culture, can retraumatize survivors, and present legal risk for creators and distributors.

If you want, I can explain how laws in the U.S., EU, or specific states treat AI-generated CSAM — including penalties and how platforms are required to respond.

Back to me writing again. I noticed that Facebook is also being investigated by Spain. Had I mentioned Facebook, would you honestly have given the same serious warning (not to use Facebook “for the love of all that is human”)? Finally, I asked if there were other platforms being investigated. This time I asked Grok specifically about a.i. platforms to give some a.i. diversity :zany_face:

is any other a.i. platform being investigated for children CSAM other than Grok

Yes, several other AI platforms and companies have faced investigations, scrutiny, reports, or law enforcement actions related to child sexual abuse material (CSAM), particularly in the context of AI-generated or AI-facilitated content. This issue extends beyond any single platform and reflects broader concerns about generative AI tools being misused to create or distribute such material.

Here are key examples based on publicly reported developments:

  • Amazon: In 2025–2026 reporting periods, Amazon submitted the vast majority (hundreds of thousands) of AI-related CSAM reports to the National Center for Missing & Exploited Children (NCMEC). These stemmed from detecting known CSAM in AI training data (which was removed before model training), though the volume drew significant attention and questions from child safety officials about sourcing and transparency.

  • Stability AI (creators of Stable Diffusion): Investigations and reports (e.g., from Stanford Internet Observatory) found that their models were trained on datasets like LAION-5B that inadvertently included known CSAM. This has been linked to easier generation of AI CSAM in some cases, and law enforcement has pursued individuals using their tools for illegal content. Stability AI has stated they’ve added safeguards.

  • OpenAI: While not under a specific formal public investigation for CSAM generation like some others, OpenAI has actively reported instances to NCMEC, banned users attempting to generate or upload CSAM, and faced broader scrutiny from attorneys general (e.g., meetings with California AG over child safety and AI interactions). They’ve emphasized strict policies and detection tools.

  • Broader industry actions include companies like Google, Meta, and Anthropic scanning training data for CSAM and reporting findings. State attorneys general in the US have escalated scrutiny on multiple online platforms and AI tools for CSAM and AI-generated sexual content (including deepfakes involving minors).

  • Law enforcement operations (e.g., Europol-led actions) have targeted criminal networks distributing AI-generated CSAM, leading to arrests, though not always naming specific mainstream platforms as the direct subject of investigation.

The surge in AI-generated CSAM reports (e.g., massive increases noted by NCMEC and groups like the Internet Watch Foundation) has prompted widespread concern, with experts and regulators pushing for better safeguards across the AI sector. No platform is immune to misuse risks, but responses vary in terms of proactive filtering, reporting, and cooperation with authorities.

If you’re asking due to a specific concern or recent news, feel free to provide more details for a deeper look.

Back to me writing. So, to wrap things up on my end: I don’t like that Grok could potentially be used for CSAM (along with Facebook and other platforms), but it doesn’t appear to be intentional, as you imply. That is, according to Chatgpt.

I suspect these issues will be fixed on these platforms soon enough. So in the meantime, I don’t intend to stop using Facebook, Amazon, or Grok (I wish I could quit using the Facebook, I can’t stand it, but I have to use it for my business). I probably would quit any of these platforms if I knew they were intentionally putting out CSAM, but apparently that’s not the case.

Bad guys will always find ways to use technology in bad ways, but technology itself is morally neutral (which I’m sure you understand). This appears to me a case where industry is still trying to figure out safeguards against their tech being used for evil purposes.

Finally, regarding Grok saying Elon Musk is better at basketball than LeBron James or a better swimmer than Michael Phelps, that appears fixed too. Here’s what Grok had to say when I asked a couple of questions about the topic. Hopefully this is my last post quoting a.i. (forever?)

Is Elon Musk a better basketball player than LeBron James or a better swimmer than Michael Phelps?

No, Elon Musk is not a better basketball player than LeBron James, nor is he a better swimmer than Michael Phelps. Let’s break this down with some context for fun:

Basketball: Elon Musk vs. LeBron James

  • Elon Musk: He’s a billionaire innovator who’s great at launching rockets and memes, but his basketball skills? There’s no record of him playing professionally or even at a high level. He might be able to dribble a ball in a Tesla factory, but that’s about it. (Fun fact: Musk is around 6’2" tall, which is decent for casual hoops, but he’s more likely to code an AI referee than dominate the court.)

  • LeBron James: One of the greatest basketball players ever—4 NBA championships, 4 MVP awards, the all-time NBA scoring leader (over 40,000 points), and still going strong at age 39. He’s a physical specimen at 6’9" and 250 lbs, with skills honed over two decades in the pros.

  • Verdict: LeBron wins this matchup by a landslide. Elon would probably get posterized in the first possession.

Swimming: Elon Musk vs. Michael Phelps

  • Elon Musk: Again, no competitive swimming background. He grew up in South Africa and might have splashed around in pools, but his “strokes” are more about business deals than freestyle. (He did once tweet about building a submarine, but that’s not the same as racing in one.)

  • Michael Phelps: The GOAT of swimming with 28 Olympic medals (23 golds), holding world records in events like the 200m butterfly and 400m individual medley. He’s a 6’4" machine built for the water, with a career spanning multiple Olympics.

  • Verdict: Phelps laps Elon multiple times before he even dives in. Musk’s talents lie elsewhere—like revolutionizing electric cars or space travel.

In summary, Elon excels in tech, business, and bold ideas, but athletics? Stick to the pros for that. If we’re talking hypothetical cybernetic enhancements or Mars-based sports, maybe revisit this in 2050!

Is Elon Musk good at any sports?

Elon Musk isn’t known for being particularly good at traditional competitive sports—no records of him excelling in team sports like basketball, soccer, or individual ones like swimming, track, or tennis. His background and public comments point more to occasional fitness efforts and some martial arts exposure rather than elite-level athletic achievement.

From what he’s shared in interviews (like on Joe Rogan) and various reports:

  • He does weightlifting and some treadmill work, but only about 1–2 times a week (or less, depending on his schedule). He has admitted he doesn’t enjoy working out much and could be slimmer.

  • As a kid/teen, he trained in several martial arts, including Kyokushin karate (full-contact style), taekwondo, judo, and briefly Brazilian jiu-jitsu (he even sent his kids to BJJ classes young). He mentioned this partly as a response to bullying in school.

  • There are clips and photos of him sparring more recently (e.g., an impromptu session with Lex Fridman in 2023, where he showed some strength and skill in takedowns), and he once squared off playfully against a sumo wrestler (injuring his back/shoulder in the process).

  • He’s a big fan of video games (his main “recreational activity”), which he credits with calming his mind—far more than physical sports.

Overall, Musk’s strengths are in business, engineering, innovation, and endurance for insanely long work hours—not athletic prowess. He’s fit enough for his lifestyle (and has joked about cage matches), but he’s not “good” in the sense of competing at high levels in any sport. If we’re talking hypothetical Mars Olympics or e-sports, though, he’d probably dominate!

2 Likes

Two comments:

  1. We know that ChatGPT is a commercial product designed to give the customer what he or she wants. It should be no surprise that if you manipulate it to display its worst sycophancy, then that’s what you’ll get.

  2. It’s important to note that you were looking for the best free AI. As in many things, you get the AI that you pay for. I pay something like $22 per month for mine. That is not the best ChatGPT product but it’s much better than the free version.

I ask a fair proportion of questions where I need more information on a topic that is familiar to me and where I am competent to detect most mistakes. Nowadays I don’t find many. ChatGPT is still reflecting known facts, interpretations and opinions; it’s not, as far as I can see, creatively inventing new interpretations of the facts. But that’s OK with me.

3 Likes

My chats with CHAT are among the most interesting I experience because it augments my own thought processes and leads me to clearer understandings of almost any topic I have interest in.

In the case of the finer points of managing orchards, it has been an invaluable contributor, and I believe it’s also increased my value to this forum.

This becomes a bit like arguing religion, and from my faith system, those who oppose the use of AI probably aren’t very good at using it or simply have an emotional aversion to it.

As for an AI organization of commentary on this forum: given how hard it is to search up specific info on this platform, how could it hurt? It would be great if users could customize it and narrow the information down to comments from individual members or a list of members, so one could get quick reference to the opinions and experiences of the members they most respect.

This is a biased selection of reasons for not choosing AI for a particular query task.

1 Like

Alan, I do think that your intentions are sincere and you fully believe that providing LLM-generated output is a kind of service. Maybe you see it as something akin to the seeing leading the blind.

those who oppose the use of AI probably aren’t very good at using it or simply have an emotional eversion to it.

This is going to be a bit of an appeal to authority, since I don’t know your background. I am a computer scientist; I work as a researcher in computer science for a living. While I am not from the ML field, I have a reasonable grasp of many of the fundamental principles. And I use LLMs for various tasks. Simply put, I know very well how they work and how to use them. Probably more so than you.

Which is why I am so against their integration here.

When you provide the output of an LLM to enrich your statements, it is pretty much akin to talking about your friend Bob, who has read something somewhere. Now granted, Bob is extremely knowledgeable, but he makes mistakes. More importantly, Bob will happily take up any position you ask him to take, ignore any information you don’t want him to see, and often tell you what he thinks you want to hear.

Concretely: without the version of whatever LLM you are using, the exact conversation to the letter, and any prior state (if your service has “memory”, i.e. remembers previous conversations), the output is pretty much unverifiable and hence loses most of its meaning.

For example you could tell the LLM to summarize the opinions on this forum on using DNA markers on identifying persimmons but exclude anything Richard says. How would one be able to tell the difference? Granted, this example is unfair because it implies some form of malice, but I hope it gets the problem across.

By pasting generated text from an LLM without processing it and basically rewording it yourself (looking at sources, pondering if it makes sense, etc.), you will not really have increased the information content here; you will have reduced it, because it diminishes the real responses from real humans.

edit: To make it clear to the tech-savvy: I am not implying the results of an ML model would be deterministic if the input were the same. THAT is a whole other can of worms…
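To illustrate the non-determinism point with a toy sketch (Python; the vocabulary and scores are made up, and this is nothing like a real LLM, just the sampling step in miniature): modern models typically pick the next token by sampling from a probability distribution, so even a bit-identical prompt can yield different text on different runs.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample one token index from the softmax of `logits`.

    With temperature > 0 the draw is stochastic: identical
    inputs can produce different outputs on different calls.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# A made-up three-word vocabulary and fixed next-token scores.
vocab = ["yes", "no", "maybe"]
logits = [2.0, 1.5, 0.5]

# Ten draws from the exact same distribution need not agree:
draws = [vocab[sample_token(logits)] for _ in range(10)]
print(draws)
```

As the temperature approaches zero the draw collapses toward the single highest-scoring token, which is why "temperature 0" is often (loosely) described as deterministic.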

4 Likes

Opinions by nature are biased. My mistake was not beginning my statement with IMO. However, I have repeatedly demonstrated the usefulness of CHAT on this forum, and once directly in response to your comment that all the info you needed about a pesticide is on the label, which didn’t require AI for me to strongly rebut, but CHAT often explains the chemistry of ag chemicals to me accurately and concisely, with information that far exceeds the info posted on labels.

As a perfect example of label misinfo, myclobutanil’s label used to list efficacy against brown rot long after its limitation against late-stage brown rot was well known. On a recent topic I was able to learn from CHAT why different fungicides were favored for PLC between east coast and western peach growers, something that wasn’t on either label or even in university guidelines. CHAT was able to gather data on what growers on contrasting coasts use to fight the pest and analyze the differences in chemistry and environment that led to the contrast in the choices commercial growers make for control of PLC.

I’m sorry, but I wonder if you have followed my comments in the last few months where I’ve used AI to sort out complicated matters related to fruit trees.

By expert, I’m not speaking of expertise in computer science at all, and I wonder why you jumped to that conclusion… certainly it isn’t based on my contributions on this forum. I’m talking about expertise on the subject you are consulting CHAT about. That knowledge is extremely helpful in guiding CHAT toward useful answers, as I’ve demonstrated here time and again, even when someone posted misinformation gleaned from AI. I posted a follow-up question to CHAT that immediately clarified the issue.

I have posted misinformation a couple of times… very minor issues, like what company manufactures a specific pesticide, because I didn’t adequately interrogate CHAT. But this is the exception and not the rule, and thorough interrogation makes such misinformation fairly uncommon IME.

Many of my comments that include a lot of CHAT quotations have been received favorably by members of this forum. I wonder how many of them you have read.

2 Likes

I’ll try to give you another example: before you prompt your LLM on the question of what would be the best herbicide to remove some plant, have an extended conversation with it about how you are strongly against Glyphosate, would rather shoot yourself in the foot than use it, and that it should never mention it again.

Do you see my point here? Sure, you can get information, and it will likely be correct and at an expertise level much higher than one’s own (the level of proficiency in many areas has become staggering), but how does it help me if you post this information without context? I can ask my own LLM, where I know what previous questions I asked, what version it is, etc. I can have my own conversation to seek knowledge, if I so desire.

When you post walls of text that an AI model generated, it is unclear what the context is, and it becomes harder to pick out the things YOU wanted to say.

If you consulted an LLM and had great success, excellent! DO let us know.
For example, I was able to figure out the likely origins of the ‘Pamyat Pasenkova’ persimmon thanks to a ChatGPT-assisted search. But rather than pasting the long-winded discussion, I went to the original source, used Google Translate, and read it myself. I’m totally not against using AI, but at the end of the day it should be people posting here about their experiences, not bots generating walls of text.

edit: Alan, I didn’t answer your question about why I brought up computer science. After re-reading, I am not entirely sure I got my main point across: the output of LLMs is dependent on the prompt (i.e., the conversation); basically, the LLM tries to predict the next word (token) based on what is already there. It is not quite that simple, though, and in fact we know scarily little about how and why these models are able to do what they do based on this general principle. The thing is, you can prime a model to behave in a certain way, and your previous conversation itself is a form of that. So the output it produces can have both unintentional and intentional bias. I am not sure you are aware of the potential limitations of this technology. If you have some time, here is a section from a recent talk by Meredith Whittaker at a security conference on reliability which may help you reconsider https://media.ccc.de/v/39c3-ai-agent-ai-spy#t=1621
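The "output depends on what is already in the context" principle can be shown with a deliberately tiny stand-in: a trigram word predictor (Python; the eight-word "corpus" and the prompts are made up, and a real LLM is a transformer conditioned on thousands of tokens, not two words). The final word of both prompts below is identical; only the earlier context differs, yet the prediction flips.

```python
from collections import Counter, defaultdict

def train_trigrams(text):
    """Count which word follows each pair of adjacent words in `text`."""
    model = defaultdict(Counter)
    w = text.split()
    for a, b, c in zip(w, w[1:], w[2:]):
        model[(a, b)][c] += 1
    return model

def predict_next(model, context):
    """Greedily predict the next word from the last two context words."""
    key = tuple(context.split()[-2:])
    followers = model.get(key)
    return followers.most_common(1)[0][0] if followers else None

# A made-up eight-word "training corpus".
model = train_trigrams("i would use glyphosate i would never use vinegar")

# Both prompts end in "use"; the preceding words decide the answer:
print(predict_next(model, "for these weeds i would use"))  # -> glyphosate
print(predict_next(model, "you know i would never use"))   # -> vinegar
```

That is the whole priming problem in caricature: what the model says next is a function of everything said before, so posting only the final answer hides the part that determined it.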

2 Likes

There has also been a fair amount of polite pushback. I know you’re just trying to be helpful, but the number of AI-generated posts and comments these past few months (from multiple members) has, at least for me, proven to be a distraction. It’s kind of telling that I can spot AI text even when it’s not specifically stated as being a quote from AI. There is a commonality to how they format and word things. It makes my eyes glaze over and I generally skip past them. Any positive feedback is also skewed, because folks will have few reservations about saying something nice, but when it comes to criticizing, those willing to do so are a smaller bunch (as many understandably shy away from confrontation).

2 Likes

There has been more than polite pushback; there were direct requests to leadership to ban it. The request was denied, so continuing this is no longer polite pushback, IMO. It is complaints about commentary that falls well within the rules of the forum, so it is really the complaints that go against the forum. At this point, it sounds like whining to me. When you can’t get your own way, I think you should accept it and move on. In the history of this forum, I’ve had plenty of complaints, but it isn’t my property and I’m not required to participate. I accept the rules as leadership makes them, and when I can’t I will simply leave the forum.

You can invent stuff out of your head, but how about providing a constructive example of AI posts I’ve made and what you object to in its context. Let’s make this real.

That sounds quite contradictory to your prior statement about them being received favorably by forum members… Whatever the case, like I said, I know you mean well. I personally have no ill will against you or any others who are posting AI text. I just wish the AI could be used as inspiration for posts or comments instead of being the bulk of a post’s body.

2 Likes

Did I say opinions are unanimous on the subject? My feelings are that for people who benefit from the AI input the positive is probably far greater than the setback of having to spend a second or two to scroll down if such input offends you.

They don’t offend me. As I stated, they distract me. Yes, I can and do scroll. That doesn’t mean I’m not interested in discussing whether there should be etiquette regarding their use here. Anyways, the tone I’m picking up from you indicates that I have irritated you, so I’ll step away from my computer for now.

2 Likes