AI-assisted social media groups

Facebook is rolling out AI assistance for groups.

The admin of one of the groups I am in (for the healing modality VH) asked if we should use it. Most seemed to get caught up in the “answering questions” side of it, and it was rejected by a clear majority.

That’s fine. We can choose what we want for whatever reason we want or for no reason at all.

And yet, the reasons people gave didn’t make so much sense to me.

HOW WOULD IT WORK?

The first question we have to ask ourselves is: How would it work?

I don’t know but I have some (educated) guesses.

AI answers would be marked “AI answer” or similar.

The moderator(s) would likely have to approve any AI answer before it's published.

The AI response would be based on the top past answers to similar questions. It would offer the essence of the most valued answers from within the same group. (This is similar to how some news sites, for instance the Norwegian Broadcasting Corporation NRK, use AI to summarize articles.)

This is all likely since it’s in Meta’s interest. They want to offer a service that makes sense to people and that would be reliable and useful.

THE QUALITY OF THE ANSWERS

Some were concerned about the quality of the AI answers. I understand that concern, but I don't see it as a reason to reject the feature outright. Why not try it first? If Meta weren't reasonably confident the AI could give useful answers, they likely wouldn't roll it out.

As mentioned above, the AI response would likely be a summary of the top answers to similar past questions.

The moderator(s) would likely approve the answer before it's published.

It would be one of many voices and we, the group members, would add to it as we normally do.

One said that we would need to know that the AI answers would be “factual and accurate”. If that was the criterion, we would have to exclude humans from commenting in the group. What we humans say is often not all that “factual and accurate”. The group discussion on this very topic is an example since some of it didn’t seem grounded in reality.

PRIVACY CONCERNS

Some had privacy concerns. I understand those concerns, although they seem based on the assumption that there is privacy here in the first place.

To me, it makes sense to assume that nothing on social media is private, including in groups.

AI would be the least of my concerns here. Most of the time, AI is a "black box", and we cannot access the content of the neural network except through the usual interfaces. It's not a database where you can go behind the scenes and look up information.

REJECTING WITHOUT KNOWING MUCH ABOUT IT

As so often happens, I see people rejecting a possibility without taking the time to understand it.

In the discussion, not a single person asked how it would work. They didn’t seem interested in learning more before making a decision.

Many seemed to reject it based on assumptions picked up from movies and media hype. In reality, the workings of AI are pretty boring. It's based on statistics, and it's not "intelligent". We decide how and when to use it. As with any tool, it has strengths and limitations, and it's useful in some situations and for some purposes.

A MISSED EDUCATIONAL OPPORTUNITY

Would I want AI assistance in this social media group?

The main reason not to adopt it is that it's not really necessary. Members already answer questions, often by referring to past discussions.

There are also some reasons to try it out.

It wouldn’t hurt. If it doesn’t work, we can just disable it.

It could be fun and spark interesting conversations.

It could make the job of the moderators and group members easier. Many questions are repeated, and the AI could provide the essence of the most valued answers from the past.

As group members, we would comment on, evaluate, elaborate on, and add to the AI answers.

In general, it would be educational, and highlight some of the strengths and weaknesses of AI.

To me, that’s a missed opportunity. And that’s fine since the group is not about AI. We can learn about that outside of this one group.

Note: There is a personal side to this for me. I often feel that when I share something I see as relatively informed and grounded, it’s overlooked. That happens in life as well as in these kinds of groups. It’s been a pattern for me my whole life, including in my birth family.

Image by me and Midjourney


INITIAL DRAFT

AI-ASSISTED SOCIAL MEDIA GROUPS

Facebook is rolling out AI assistance for groups. The admin of one of the groups I am in (a healing modality) asked if we should use it, and it was rejected by the clear majority.

That’s fine. We can choose what we want for whatever reason we want or for no reason at all.

And yet, the reasons people gave didn’t make so much sense to me.

Some had privacy concerns. To me, it makes sense to assume that nothing on social media is private, including in groups. It makes sense to assume nothing on the internet is private unless it’s your bank or government accounts. It makes sense to assume not even that is private. If you want something to be private, don’t share it online. (AI would be the least of my concerns here.)

Some had concerns about the “answering questions” side of the AI assistance. I assume AI answers would be clearly marked, and I also assume the moderator(s) have a say in how and when it happens and have the ability to check and filter. I also assume that the answers would be based on human answers to similar questions in the past. (Especially answers from moderators and highly liked/loved answers.)

They said AI can’t be trusted. Some said that they wouldn’t want AI answers unless you could be certain it is factual and accurate.

Again, that doesn’t quite make sense to me.

The AI answers would be based on the top answers in the past. The moderator(s) will likely approve the answer or not before it’s published. And if you have “factual and accurate” as a criterion, then you couldn’t allow humans to comment in the group. What humans say is often not all that “factual and accurate”.

Would I want AI assistance in this social media group?

I think it could be fun and spark interesting conversations. Just like they do in schools, we could comment on and evaluate the AI answers. It would be educational, and highlight some of the strengths and weaknesses of AI.

FRAGMENTS

FACTUAL ANSWERS?

Some had concerns about the “answering questions” side of the AI assistance.

I assume AI answers would be marked (“AI bot” or similar). I assume the moderator(s) have a say in how and when it happens and have the ability to check and filter. I also assume that the answers would be based on human answers to similar questions in the past. (Especially answers from moderators and highly liked/loved answers.)

They said AI can’t be trusted. Some said that they wouldn’t want AI answers unless you could be certain it is factual and accurate.

Again, that doesn’t quite make sense to me.

The AI answers would be based on the top answers in the past. The moderator(s) will likely approve the answer or not before it’s published. And if you have “factual and accurate” as a criterion, then you couldn’t allow humans to comment in the group. What humans say is often not all that “factual and accurate”.

REJECTING WITHOUT KNOWING MUCH ABOUT IT

As so often happens, I see people rejecting a possibility without knowing much about it.

They didn’t seem to understand how it would work. That it very likely would be based on top comments from the past, that the answers would be marked, and that the moderator(s) would likely have the opportunity to filter and approve the AI answers before they were published.

They seemed to be concerned about privacy when nothing is private in these groups to begin with.

They seemed to have other criteria for AI than they have for humans. Why would AI answers have to be “factual and accurate” when human answers often are not? In this case, the AI answers would be based on human answers, and likely the most valued and appreciated human answers from the past.


SECOND DRAFT

Facebook is rolling out AI assistance for groups.

The admin of one of the groups I am in (a healing modality) asked if we should use it. Most seemed to focus on the “answering questions” aspect, and it was rejected by a clear majority.

That’s fine. We can choose what we want for whatever reason we want or for no reason at all.

And yet, the reasons people gave didn’t make so much sense to me.

HOW WOULD IT WORK?

The first question is: How would it work?

I don’t know for certain but I have some guesses.

I assume AI answers would be marked “AI answer” or similar.

I assume the moderator(s) will have to approve any AI answer before it’s published.

I also assume that the AI response would be based on past discussions in the group on similar topics, and offer a summary of the most liked and appreciated answer(s).

FACTUAL AND ACCURATE?

One of the concerns voiced was the quality of the AI answers. I understand that concern, and also not quite.

Most likely, the AI answer would be a brief summary of the best answers to similar questions from the past.

The moderator(s) will approve the answer before it’s published.

It would be one of many voices and we, the group members, would add to it as we normally do.

One said that the AI answer would need to be “factual and accurate” before it should be allowed. If you have that as a criterion, then you couldn’t allow humans to comment in the group. What we humans say is often not all that “factual and accurate”. (As the group discussion on this topic clearly showed.)

PRIVACY CONCERNS

Some had privacy concerns.

To me, it makes sense to assume that nothing on social media is private, including in groups. In general, it makes sense to assume nothing on the internet is private unless it’s your bank or government accounts. It makes sense to assume not even that is private. If you want something to be private, don’t share it online.

AI would be the least of my concerns here.

REJECTING WITHOUT KNOWING MUCH ABOUT IT

As so often happens, I see people rejecting a possibility without knowing much about it.

They didn’t seem to understand how it would work and didn’t seem interested in learning more about it before deciding.

Some may have misconceptions about AI in general, likely from movies and media hype. In reality, AI is more boring than that. It’s based on statistics. It’s not “intelligent”. As with any tool, it’s useful in some situations and for some purposes.

AN EDUCATIONAL OPPORTUNITY

Would I want AI assistance in this social media group?

I think it could be fun and spark interesting conversations.

It could make the job of the moderators and group members easier. Many questions are repeated, and the AI would sift through similar discussions and provide the most valued answers from the past.

As group members, we would comment on, evaluate, elaborate on, and add to the AI answers.

In general, it would be educational, and highlight some of the strengths and weaknesses of AI.

To me, it’s a missed opportunity. And that’s fine since the group is not about AI. We can learn about that outside of this one group.
