Friendica has several feeds, including "Latest Activity", "Latest Posts", and the local and federated network feeds. As far as I know, none is generated through any sort of "engagement" algorithm like on Big Social. The closest is Latest Activity, which is nothing mysterious: it does exactly what it says on the tin, showing the posts that have most recently been acted upon by people you follow. I wonder if this person might be seeing different trends based on who, globally, is in a waking time zone at the time.
Given the tone and tenor of that person's other posts, they seem quick to anger and judgment. I hope they find answers to their question, and also, thank you for alerting me to someone I prefer to defederate from.
@Spencer It's much easier to call a person crazy than to investigate their claim. I dare you to sign up on any Pleroma instance with more than 20 users and start making waves. I double dog dare you.
@Peter Weyand Apologies. I just forwarded your post because it looked as if you were searching for support for Friendica rather than Mastodon. I understand your point but don't share your problem here so I can't be of help myself. @Spencer @Friendica Support
@Kristian @VegOS First, I don't necessarily know which accounts are bots and which aren't (chatGPT), and second, I don't know whether I can even see them from my feed, or whether friends of friends of friends (to some nth degree) are bots.
I also believe it is possible (and I believe I've witnessed this happening) that if I block a bot, the server that originated the bot will detect this and in retaliation link me to something even worse. So for example, if I block a bot that posts memes about climate change, then my feed subsequently ends up filled with neo-Nazism. I don't know exactly how this occurs - it may be that malicious bots upvote certain content and downvote other content, so that when they're banned the feed just becomes polluted in another way. It may be that there are mechanisms in the offending server to message other servers to follow me and then subscribe bots.
I can't say for certain the mechanism that is causing this - all I can say is that I can look at my feed and I can say for certain there are wild swings in the type of content that I'm seeing from banning suspicious meme driven content. I can also say that the content that I'm seeing comes in clustered "types" that match a certain set of psychological profiles for what a user would like to see. I believe this is done in order to either promote a certain belief system in order to either sell a product or get someone to vote or act in a certain way.
I'm saying that the Activity Pub software is constructed in such a way that the entire Global Feed system is broken to the point of uselessness because of chatGPT software and automated bots. There is no way to verify for certain on here who is a human being and who isn't (and I don't think that public key cryptography works any more given chatGPT can mimic human generated text).
Is this your problem? Probably not. Activity Pub is a W3C standard now, for better or worse. I'm just saying it doesn't work as advertised, and it's obvious to anyone who uses this software that its design makes it possible to manipulate others.
I believe the user base is made up of three types of people -
Those who know this. These people are making the bots in order to manipulate the user base and ultimately cash in or are social media types that know how to appease the new algorithm masters and say the right memes so that they can cash in. These are people that are intentionally manipulating people (that is controlling what information they see and thereby controlling how they think and act). These people are evil. I don't know any other word for it.
There are those who know this is going on but don't want to participate, don't know how to cash in, or both. We're the outsiders in a system that we see becoming worse than Twitter. There, you knew who the owners were; it wasn't some shadowy cabal that would only come to light when the whole thing came crashing to the ground spectacularly (and it will). Some people have few friends and don't link to the global feed. Some people are treated like dupes or have their reputation smeared by people they annoy.
Again, go to any of the sites using the Pleroma software that have more than 20 people and look to see whether the site isn't almost entirely bots. Is this your problem because you are running Friendica? It is if any of your users link to that site, or have followers to the nth degree who link to that site. Behold the Glory that is Activity Pub!
And then, there's everyone else who has the intelligence of a hamster and personally I couldn't tell if their shitposting were different than a robot if I tried.
Take all this for what you will. If you want to call me a nutter that's fine, but if you don't take a hard look at Pleroma or how the Activity Pub software works then it's just name calling. My Global Feed is borked and it's because this software doesn't work as advertised.
@Peter Weyand As far as I'm concerned, I wouldn't call you a nutter, but I can't ... really follow you here because, to be honest, I don't get your problem. I'm on Mastodon and Friendica. Most of my interactions happen either with the local timeline or my "friends" feed - in the case of Friendica even more so with a group of preferred people I enjoy interacting with on a daily basis. I haven't really looked outside this scope because I never saw the need for it, except while searching for some particular specific topics (which is a thing of its own, given that the Fediverse still lacks a reasonable way of searching content). So personally, a messed-up Global feed isn't really _my_ problem. Bluntly speaking, an unfiltered global feed has been next to unusable on most of the instances I've seen so far, and most likely a "global feed" as in "show me _everything_ there is" is unusable in every medium of sufficient size. I do, however, see three things here:
(a) Size. One of the frequently announced ideas is that the Fediverse should grow around small instances, in the best case individual instances. I know this is difficult, but it might fix the problem you see on instances with more than 20 people. Get yourself a small crowd of people you trust and roll your own, and know to block or defederate servers that cause havoc or are annoying to you and yours.
(b) ChatGPT and recognizing ML generated content. Sure, I think it's more than likely that ChatGPT and similar tools could be "weaponized" in such a way. This feels threatening, but I don't see this to be something a protocol such as ActivityPub could ever prevent. Maybe you have some suggestions on how to handle that?
(c) I don't get that last part of your messages ("... it's because this software doesn't work as advertised"). What exactly do you mean by that?
@Kristian You can search by following #hashtags. I have no problem with my global timeline on Friendica, as the accounts I follow (about 500) and my followed #hashtags (about 50+) dominate it. Of course there is other stuff, but I don't mind. I still don't get all the news I had on Twitter, which I left. I joined the Fedi about 10 years ago. Fedi was for conversation, Twitter was for news. But I feel I have replaced Twitter by about 90% ATM.
@VegOS I think it greatly differs. The Friendica instance I am on has the least bad public timeline. Most Mastodon instances I've encountered are noisier, and some of the Pleroma or Misskey systems are drastically worse (in terms of political extremism / hatespeech, disputable/untagged nsfw content, conspiracy theory stuff and all the like). But same here, I don't really care or need that. Following hashtags, using the feed containing all the status updates my contacts posted and the like are more than enough to go with. But... the only thing is: Hashtag search is tedious and limited. I get the rationale why, in example, Mastodon doesn't want to do a federated full-text search but I still think the search capabilities on the Fediverse leave a lot to be desired.
@Kristian I even follow #hashtags whose search yielded no results. That's a kind of catch net for me. It works. For the other stuff (in terms of ...), I block the rubbish. I had to do the same on Twitter. Thanks for your assessment of Pleroma; I've heard similar things about Misskey myself. I think Friendica instances differ a bit from each other. I changed to another one and the timelines are not the same - at least the timing is different.
I absolutely can confirm Peter's claim, same over here.
Not going to discuss this, much less with some maybe-bot that's set up to make me lose time or energy.
The issue ends up at the instance level: big instances, no check on the users and abusers ... bad. From that perspective @ Eugen has or might have some issues, because the biggest instances are built by him. I reckon that's good for several reasons, like testing and scaling, but if they become polluters, I'm going to do the same with them as I did with some chatbot that just ran out of code when I called "him" out.
The world we live in is not Eugen's fault.
If this goes on I'll start to publicly call out the sites that are apparently the worst offenders as unsafe, block them, and ask my respective admins and community to block them too.
Those are our tools and we'll have to work it out.
Does anyone really believe "they" will just let us happen like that?
Not going to discuss this, much less with some maybe-bot that's set up to make me lose time or energy.
This is an interesting take that made me think a lot. Maybe that's a dimension of the whole ChatGPT and ML "mess" we didn't even consider so far - does the mere existence of these tools also call for a new line of "defense" in our reasoning and communication, claiming we can't or won't try clarifying our views to each other because there's a legitimate assumption we're "just" discussing with a bot? That's quite scary.
That aside, well, I agree on that last part: those are our tools and we'll have to work it out. Following Peter's statement, I somehow get the feeling that there's an assumption of possibilities offered by ActivityPub which the protocol simply can't provide, most likely both because it was never meant to do so and because these are social, not technical, issues that need to be addressed on a social, not technical, level. So far I have yet to see a valid solution for this whole "trust" issue that works without having met a human being "in person" once and exchanged some sort of "credential" that allows for re-recognizing them online as well. But hasn't it been like this ever since? Was it different back then, in the age of mailing lists or newsgroups? It doesn't seem new, just more ... difficult with the rise of ChatGPT et al...?
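The "exchanged in person" credential idea can be illustrated with a minimal sketch: two people who once shared a secret offline can later re-recognize each other by attaching an authentication tag to their messages. This is only a toy illustration of the concept (the secret and messages are invented), not a proposal for the ActivityPub protocol:

```python
# Minimal sketch of the "credential exchanged in person" idea:
# whoever holds the shared secret can prove it's still them by
# attaching an HMAC tag to their messages. Purely illustrative.
import hmac
import hashlib

shared_secret = b"exchanged-at-a-meetup"  # hypothetical in-person credential

def tag(message: bytes) -> str:
    """Compute an authentication tag for a message."""
    return hmac.new(shared_secret, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    """Check a message/tag pair using a timing-safe comparison."""
    return hmac.compare_digest(tag(message), received_tag)

msg = b"hello, it's really me"
t = tag(msg)
print(verify(msg, t))                # True: sender holds the secret
print(verify(b"forged message", t))  # False: tag doesn't match
```

Of course, this only shifts the problem: it proves possession of the secret, not that a human (rather than a bot handed the secret) is typing.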
What you're saying doesn't make sense to me. The promise of Activity Pub is that there should be chat rooms of less than 20 people. Ok...isn't that what a websocket is?
Activity Pub is premised on this. Client talks to server. Server sockets to several other clients. Server also connects to other servers (Activity Pub!) to distribute content, which then socket to their own clients. I'm saying this is broken, because if any of the friends that I connect to outside of my server then connects to a server that is bot infested then my global feed is filled with bots which show misinformation.
Ok, so I'll grant. If I'm on server A and some other guy is on server B I can connect to him and we can talk and I can see his posts. But I can't "follow" him, because I don't know that what he follows or those he follows to the nth degree aren't bots. So what? I'm supposed to use a local feed of known users and follow their stuff only? How do I discover other people's content unless I use the regular old internet or meet someone in person?
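The "nth degree" worry above can be made concrete with a toy model: treat servers as nodes in a follow graph and walk it breadth-first to see whose content can reach a federated timeline. The server names and edges below are entirely hypothetical:

```python
# Toy model: which servers' content can reach my federated timeline
# through transitive follows. The follow graph is invented for illustration.
from collections import deque

follows = {
    "server-a": {"server-b"},      # my instance follows someone on B
    "server-b": {"server-c"},      # a B user follows someone on C
    "server-c": {"bot-instance"},  # a C user follows a bot-heavy server
    "bot-instance": set(),
}

def federated_reach(origin):
    """Breadth-first walk: every server whose posts can end up in
    origin's federated timeline via some chain of follows."""
    seen, queue = {origin}, deque([origin])
    while queue:
        server = queue.popleft()
        for neighbor in follows.get(server, set()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {origin}

print(sorted(federated_reach("server-a")))
# bot-instance shows up even though server-a never followed it directly
```

The point of the sketch: one bad edge anywhere along the chain is enough for that server's content to arrive, which is the scenario Peter is describing.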
By "work as advertised" I mean there should be a way to use the global feed in Friendica (or Activity Pub generally). As for playing whack-a-mole with every instance of software that comes up by adding it to a ban list, that's like asking a website owner to add every website in the world to a robots.txt file, or to give their email address and cell phone number to everyone who visits their website.
As far as how to fix this? Oh geez, I don't know that it's possible at this point. I've thought of complicated ways of trading public keys layered on top of the Activity Pub protocol, but there would have to be a way of verifying that someone wasn't a chatGPT bot and I don't think that's currently possible. I think public key encryption over the internet is broken. And it's not a matter of "I know you're not a bot and you know I'm not a bot". Activity Pub is made so "I need to know that you're not a bot and you don't follow bots and they don't follow bots and then don't..."
What Activity Pub to me is right now is a websocket client with a way of connecting to other websocket clients and a button next to it that says "Global Feed" which instead should say "Here there be Dragons".
@Peter Weyand Hmmm, I'm still a tad lost, not sure whether I'm too blind to see the obvious, as I feel there's quite a mix of technological and social aspects in your posts. Is your core issue that you have content and people around who you don't trust, who you don't know whether or how they are trustworthy or just manipulative? It feels a bit like this, but I want to make sure before thinking any further... @VegOS
@Peter Weyand @Kristian @VegOS Friendica has a setting to let you see only conversations started by your follows, so that you aren't exposed to the conversation from accounts you don't follow which your follow reply to.
Is this person even real? If so, is this their account, or has it been created by someone else or a bot (without their consent)? If so, is their content feed being created by a chatbot or not? If not, are all the replies to this thread created by content bots or not? If the content is created by comment bots, what is the bot maker's intent?
Question 1 may be verifiable. Question 2, maybe not. Question 3, they may actively lie to you. Question 4, who knows. Question 5 falls into the realm of speculating about a psychological manipulator.
Given that I've replied to this thread, does that mean that every chatbot in the thread now sees that I'm a dupe who took the bait? Does this mean I'm added to the "mark" category and more such stupidity will show up in the global feed? View my comment here (libranet.de/display/0b6b25a8-1…) and view all the rest of the comments in the thread (mastodon.social/@Sheril/109667…). Is no one responding because I'm being a dick, or are all of these comments one-line hot takes from bots attempting to upvote someone on PBS who wants people to watch her show and get her post and her pretty face in as many global feeds as possible?
Grandma sitting at home thinks this woman is popular on the internet and will watch her show. Idiots with guns will think that their chosen shibboleth is popular on the internet with "cool" reactionary fascists. Everyone else will tune out and realize the game is rigged - and that's not good when you're all out here trying to "remake" the internet, the machine the stupid monkeys that discovered fire decided to hook their entire culture to.
And this is an easy example. Shit, for all I know this woman's trying to prove a point by being this transparent. I've speculated above that the way that bots interact with posts downstream from me can manipulate the social graph of k-means clustered social interactions, but I'd have to dive into the Activity Pub protocol to see how they're doing this or the number of bots required.
This isn't a "pretend" problem. This is a "there is no way to tell information from disinformation" problem. The Kremlin no longer has to pay some group of underpaid interns in Latvia to pretend to be angry Americans, now you can automate the invective.
This is serious. People act in real life based on what they read and how they interact with the internet. Many people change their behaviors just slightly enough that in the aggregate it can sway elections, cause products to be bought, or wars to be fought. If you think you're immune, boy, have I got news for you.
If you're bots, I don't particularly care. I'm writing this in large part as a record of rhetoric that I can use for myself later, much like an interactive diary. What you're building has no safeguards for how *people* work and it's going to cause a total shitstorm when it breaks.
@Peter Weyand Hmmmm. There are valid questions in there, but I wonder whether you can name any kind of medium available to a large audience that has solved these? TV? Printed newspapers? Any other digital format? And again, what would be your suggestions to fix that, on any particular level? What means do you see to make such a system less prone to intentional abuse? The best and maybe only approach I see at the moment is trying to educate people to think twice, question things, try to figure things out - and at some point be aware that these kinds of abuse are always possible; just look at wartime propaganda and manipulation in "old-fashioned" media channels. That's tough, but I really don't have any better idea.
In a media empire there is an editor and someone who is ultimately responsible for running the business. The buck stops at someone. There are W2s so you know which reporter is being paid to write what story.
This is an entirely other beast. This will end badly.
@Peter Weyand So trust is all about money then? Would you trust the "owner" responsible for running the business? Why should they lie to you any more or any less than anyone else out there? And worst of all: maybe they're not even lying or trying to manipulate but are just totally _convinced_ of a certain world view which should be rejected by all means? And we're really talking _mass media_ here, media even reaching people who don't have access to computers or the internet. That seems a gross oversimplification of things, and at least to me it seems to boil down to the same question: who do you trust and why - and how to detect manipulation in reasonably complex media structures. I don't have a solution, but I don't see any on your side either. But, as this is where we started, I still fail to see what could be done specifically at the level of a technical protocol to fix this.
Although the description of the problem itself is true, and the level of language quite outstanding, we are just facing the same problems, the same shit, in a different way.
For example, algos that feed us what we want to see (or the contrary) are normal on the other sites. So that "lady" normally just sells you some ads or lures you to sex.com. AI, in the sense of understanding and semantic interpretation, is a real threat and danger to society - I guess much worse in walled gardens than out here where we are, where we might uncover them or evade them more easily.
Btw, Friendica also has a positive server list (an allowlist) as a restriction; that's most likely one of the features to keep in mind as well.
People aren't making these botnets for no reason other than the lolz - OK, maybe some are. What the majority of them are doing is attempting to convert attention into money by cornering the market.
You don't know who they are. With a newspaper you do know who they are. That's the difference.
@Georg aus Bakum Let's put it this way: the basic problem (I can only very rarely verify information that someone posts) seems to me to have existed unchanged since the dawn of mass media. With ChatGPT or other AIs this may gain an additional problem dimension, but to be honest, I don't see that only in the Fediverse. The discussion about deep fakes - about forged, altered, fully AI-generated videos and images - is already somewhat older. From my point of view, the Fediverse has no more advantages or disadvantages there than any other network. Maybe(!) the Fediverse, with a suitable "cut" of instances (more small systems, federation only with selected servers, more or less rigorous blocking/muting), offers more technical tooling to remain capable of acting in case of doubt. But the question of who is sitting across from me in a chat and with what motivation they interact is one that technology, I believe, cannot answer - at least not in a way we all want (the other direction would be a rigorously enforced digital identity that makes absolutely plausible in all cases who "who" is and who posts what and when - that is, the complete loss of anonymity).
Of course, manipulation with the help of bots (artificially set-up accounts) and AI cannot be ruled out here either, the more the Fediverse gains in importance. But so far the protagonists of the Fediverse have ruled out the manipulation of timelines; corresponding algorithms that make posts more or less visible depending on their content supposedly don't exist. In this post, however, the opposite is claimed.
@Georg aus Bakum That's tricky, because the use of the word "algorithm" in this context is usually problematic or misleading. "In general", when people say "no algorithms" here, they mean the absence of the logic at Twitter, Facebook, Instagram, ... that washes posts to the top depending on the amount of interactions or other obscure metrics and hides others. The Fediverse _of course_ also has algorithms, because computers cannot work without algorithms. The timeline is sorted chronologically - sorting is the first fundamental task by which computer science students commonly learn what algorithms even are. Chronological sorting _is_ an algorithm. The way systems federate with each other is based on an algorithm, see for example here:
These are algorithms; they have boundary conditions and properties, and if I know the properties, I can manipulate by working with those properties. Example construct: I have a chronological timeline, so posts are sorted in descending order by date. I can potentially "flood" such a timeline by posting an extreme amount in loose succession - then people who have me in their home timeline, or my instance in their federated timeline, see practically only what I want, or have to scroll an extreme amount (algorithms at Twitter, Instagram, Flickr ... prevent that quite well). Or: I create an instance on which only nonsense is posted (for example by bots generating lots of content at high frequency). If I get one plausible user there whom someone on another instance follows (or I create several users on other instances who follow my user on my evil instance), then the federated timeline of that other instance will also show all that junk.
That is put in simplified, abstract terms; I have no idea whether or how the common implementations (Mastodon, Friendica, ...) technically prevent this somehow. Solutions would be, for example: don't follow suspect users or users of suspect instances. Or mute/block such an instance. Or mute recognizable accounts that post too much. Or ...? The challenge: can you tell whether the content being posted is "nonsense" or not? I believe the core problems here are basic expectations of the underlying technology and systems, plus a difficult notion of the term "algorithm". In the posts and questions here, a lot of things seem to me to be mixed up that somehow interact with each other but are potentially different problems.
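The timeline-flooding construct described in this thread can be sketched as a toy simulation: with purely chronological sorting, one high-frequency account dominates the visible window. The accounts and post counts below are invented for illustration:

```python
# Toy model of timeline flooding: purely chronological sorting means
# whoever posts most often dominates the visible window.
# All accounts and frequencies are invented for illustration.
posts = []
posts += [("flooder", t) for t in range(0, 100)]     # posts every tick
posts += [("friend", t) for t in range(0, 100, 20)]  # posts every 20 ticks

# Chronological timeline: newest first, readers see only the top 20
timeline = sorted(posts, key=lambda p: p[1], reverse=True)[:20]
share = sum(1 for author, _ in timeline if author == "flooder") / len(timeline)
print(f"flooder share of visible window: {share:.0%}")
```

No ranking logic is needed for this effect: the "algorithm" being exploited is nothing more than sort-by-date plus a finite screen.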
Blocking instances is a poor tool. Blocking and/or muting individual accounts quickly becomes a mammoth task as user numbers rise, but that's how I currently handle it...
@Das Leben ist schön Fundamentally, better solutions than the current ones will be needed there. The problem isn't entirely new either; see for example pod.geraspora.de/posts/1235904… , which is two years old (and we also know the problem from e-mail, even if that may differ slightly). I suspect it ... really does come down to the question of how you work with the system. That's what I meant above in the thread - I use the federated timeline rarely to never, have groups (on Friendica) or lists (on Mastodon) for "important" users whose content and topics interest me, and occasionally weed out accounts I follow. Maybe some of this resolves itself through the awareness that the global timeline is "only" automatically federated and therefore per se unsorted and unmoderated. That still leaves the fact that federation with potentially "evil" instances creates network load and bandwidth. That's annoying, and in the long run it may also need more solutions analogous to spam and junk filtering for e-mail (blacklisting, greylisting, exchanging information about "nasty" instances, ...). Or preventing federation of an instance's entire public timeline, federating only the posts of users there whom someone on my side follows locally. I don't know, though, whether that is (already, or technically at all) possible.
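The email-style filtering analogy (blacklisting, greylisting) could, very roughly, look like the sketch below. The instance names and the 24-hour greylist delay are invented assumptions, not anything Friendica or Mastodon actually implement:

```python
# Sketch of email-style federation filtering: a blocklist plus a
# greylist that only admits instances after they've been seen for a while.
# Instance names and the policy threshold are invented for illustration.
import time

BLOCKLIST = {"spam.example"}
GREYLIST_DELAY = 24 * 3600  # seconds before a newly seen instance is trusted
first_seen: dict[str, float] = {}

def admit(instance: str, now: float) -> bool:
    """Decide whether to federate a post arriving from `instance`."""
    if instance in BLOCKLIST:
        return False
    if instance not in first_seen:
        first_seen[instance] = now  # greylist: record it, reject for now
        return False
    return now - first_seen[instance] >= GREYLIST_DELAY

t0 = time.time()
print(admit("spam.example", t0))         # blocked outright
print(admit("new.example", t0))          # greylisted on first contact
print(admit("new.example", t0 + 90000))  # admitted after the delay
```

As with email greylisting, the idea is that throwaway spam instances won't bother retrying, while legitimate servers persist and eventually get through.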
Thank you, Kristian, you've understood the problem. I'm not saying Friendica or Mastodon uses an algorithm. Since ActivityPub allows multiple servers to connect, botnets can instead exploit the way servers connect in order to manipulate users. It's the botnets (and those bots) that have their own internal algorithms for manipulating the network. In many ways that is *worse*, because instead of one central authority that is known and can be criticized, there are now several shadow authorities controlling the flow of information that are unknown. Obviously there's money involved. I don't know what the solution is (or whether there is one), and because ActivityPub is becoming so popular, I worry that these bots can manipulate people's behavior. But that *is* a problem. Thank you for writing the explanation in German for your German colleagues - you've explained my point exactly.
@peter_weyand @AGVEiz @helpers I don't believe that botnets are hijacking the Fediverse. To me it sounds like a conspiracy theory that this is supposedly even more dangerous than on centralized proprietary social networks.
@Georg aus Bakum I don't share that last assessment either, but I do believe there is a rational, sober way of looking at these things. The Fediverse is no silver bullet, and neither is ActivityPub. Centralized systems and distributed infrastructures have, from many perspectives, very different attack surfaces and conceivable threat scenarios. The current Fediverse tries to correct a number of the problems we found in the centralized systems - above all the fact that there is a single, only moderately transparent structure on whose goodwill and decisions it depends who can communicate and who can't, who gets blocked and who doesn't, and so on. A decentralized system solves that by breaking up this central authority, distributing those decisions across many small shoulders and, for example, giving individual communities the ability to create their own spaces with their own rules that they don't have to justify to anyone. The immediate downside of this exercise: if, say, a large instance like chaos.social decides to block another large instance like mastodonten.de (as happened a few years ago), that abruptly breaks communication for a great many users. I don't think the Fediverse is more or less dangerous - but I do think we are well advised to discuss it critically and to treat critical feedback constructively. One should also remember: at the end of the day, until recently the Fediverse was rather ... small compared to platforms like Twitter or Facebook. New challenges will certainly come with size.
Also interesting, for example: jwz.org/blog/2022/11/mastodon-… : if I post a link to a website into the Fediverse, the instances that see this link will contact the server being linked to. Consequence: if a link gets federated widely, a few thousand instances may suddenly hit the server it points to, completely uncoordinated. If the link points to a web server on a RaspPi behind a thin connection, or to a system like solar.lowtechmagazine.com/, that can mean real stress - or, conversely, it raises the demands on server and network resources, which for various reasons (energy saving, at the very least) we don't actually want.
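A rough back-of-the-envelope sketch of that fan-out; every number here is an assumption for illustration, not a measurement of any real instance population:

```python
# Back-of-the-envelope sketch of the link-preview "stampede" described
# above: many instances independently fetch a preview of the same URL
# shortly after a post federates. All numbers are illustrative guesses.

def preview_stampede(instances: int, preview_bytes: int, window_s: int) -> tuple[float, float]:
    """Return (requests per second, sustained throughput in Mbit/s) if
    `instances` servers each fetch one `preview_bytes`-sized page within
    a `window_s`-second window."""
    rps = instances / window_s
    mbit_per_s = instances * preview_bytes * 8 / window_s / 1e6
    return rps, mbit_per_s

# e.g. 3000 instances, 100 KB per fetch (page + preview image), 60 s window
rps, mbit = preview_stampede(3000, 100_000, 60)
print(f"{rps:.0f} req/s, {mbit:.0f} Mbit/s sustained")  # → 50 req/s, 40 Mbit/s
```

Forty sustained megabits is nothing for a CDN and everything for a solar-powered server or a RaspPi on a thin uplink, which is exactly the asymmetry the linked post complains about.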
Or (I don't have the link to hand any more) a Mastodon user's request not to share large videos via Mastodon, because the consequence is that the videos (like images, too) get copied many times between instances and stored locally, filling the systems up.
Both of these are things that are neither surprising nor frightening, and in my view they're not fundamentally bad or unmanageable - but they are topics where "centralized" systems can have certain advantages, and which the Fediverse community will probably want to deal with somehow at some point. And the question of the extent to which botnets can do some nonsense by flooding the systems with content that nobody wants and that machines filter poorly may well be part of that - see also the quality some spam and phishing e-mails have these days, so "good" that these messages slip through even decent automatic filters and people wire money to strangers.
You're describing other problems that exist in decentralized networks. I absolutely agree with you there! One weak point is moderation, which can vary greatly from instance to instance and leads to situations where you cannot see content you might actually want to see, because entire instances or individual accounts are locked out.
It can also happen that you yourself get blocked because a moderator holds a different opinion than you do. This potential censorship, which follows no uniform standard, is the biggest problem here for me. So how much you can see depends on your choice of instance. On the other hand, it's a taboo subject that decidedly inappropriate content (Nazi material, violence, hardcore porn, Reichsbürger stuff, etc.) does get shared over decentralized networks.
That content is then just one click away, on a locked-out instance. Peertube is positively teeming with it. That is a different problem from the one Weyand described. Of course, in a certain sense he's even right that there could also be accounts here run by artificial intelligence. But that they are hijacking the Fediverse I consider a conspiracy theory.
Ultimately it is every individual's responsibility to check critically whether the content here is true. But that applies in principle to all content on the internet.
@Georg aus Bakum Yes, I see at least this point the same way. It will come down to critically evaluating what you read, and that becomes harder the "better" any faked reports or content are - both for "formal" reasons (because with ChatGPT and friends it is much easier to generate "plausible"-looking texts and content) and for reasons of complexity (because, as an ordinary mortal without expertise in a given field, I can only ever verify many of the claims, theories, and theses buzzing around the world up to a certain point). I generally don't see this as something technology could somehow solve or improve.
Right! But this phenomenon is not a problem specific to decentralized networks; it is a fundamental problem of every network, i.e. of the entire internet.
@Georg aus Bakum Yes, of course those are different problems. 🙂 My point was just this: when comparing centralized and decentralized systems (or, generally, technical solutions that follow different paradigms), I can rarely determine "better" or "worse" in absolute terms, but usually only with respect to the specific requirements and expectations I bring along and the problems I want to solve. There will probably always be, and need to be, further development there.
We're in complete agreement there! I do see a few weak points here. There are certainly reasons why one person or another would rather use Insta or Twitter than Friendica, Pleroma, or Mastodon. Here it depends on the goodwill of individual people what gets tolerated and what doesn't. On Twitter and Facebook it is probably algorithms that block content. Here it can happen that the person doing the moderating simply finds you unlikeable.
Is this person even real? If so, is this their account, or has it been created by someone else or a bot (without their consent)? If so, is their content feed being created by a chatbot or not? If not, then are all the replies to this thread created by content bots or not? If the content is created by comment bots, what is the bot maker's intent?
Question 1 may be verifiable. Question 2, maybe not. Question 3, they may actively lie to you. Question 4, who knows. Question 5 falls into the realm of speculation about a psychological manipulator.
Given that I've replied to this thread, does that mean that every chatbot in the thread now sees that I'm a dupe who took the bait? Does that mean I'm added to the "mark" category and more such stupidity will show up in the global feed? View my comment here (libranet.de/display/0b6b25a8-1…) and view all the rest of the comments in the thread (mastodon.social/@Sheril/109667…). Is no one responding because I'm being a dick, or are all of these comments one-line hot takes from bots attempting to upvote someone on PBS who wants people to watch her show and get her post and her pretty face into as many global feeds as possible?
Grandma sitting at home thinks this woman is popular on the internet and will watch her show. Idiots with guns will think that their chosen shibboleth is popular on the internet with "cool" reactionary fascists. Everyone else will tune out and realize the game is rigged - and that's not good when you're all out here trying to "remake" the internet, the machine the stupid monkeys that discovered fire decided to hook their entire culture to.
And this is an easy example. Shit, for all I know this woman's trying to prove a point by being this transparent. I've speculated above that the way that bots interact with posts downstream from me can manipulate the social graph of k-means clustered social interactions, but I'd have to dive into the Activity Pub protocol to see how they're doing this or the number of bots required.
This isn't a "pretend" problem. This is a "there is no way to tell information from disinformation" problem. The Kremlin no longer has to pay some group of underpaid interns in Latvia to pretend to be angry Americans, now you can automate the invective.
This is serious. People act in real life based on what they read and how they interact with the internet. Many people change their behaviors just slightly enough that in the aggregate it can sway elections, cause products to be bought, or wars to be fought. If you think you're immune, boy, have I got news for you.
If you're bots, I don't particularly care. I'm writing this in large part as a record of rhetoric that I can use for myself later, much like an interactive diary. What you're building has no safeguards for how *people* work and it's going to cause a total shitstorm when it breaks.
So first - I wrote a little on this: libranet.de/display/0b6b25a8-1… In essence, in order for the internet to work there must be a way to pass messages remotely, there must be a way to confirm that the person on the other end of the message is a human being, and this method must be robust (that is, we must believe that there won't be a chatbot in the future powerful enough to break our verification method).
This goes beyond SHA hashing. Public/private-key encryption only works if we believe that two people can share a private key - that is, if I trust that the sender is human.
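That trust gap can be made concrete with a small sketch. This uses a shared-secret HMAC rather than full public-key cryptography, and the messages and secret are invented, but the point is the same: cryptographic verification proves possession of a key, not the humanity of whoever holds it.

```python
import hashlib
import hmac

# Illustration of the point above: a signature (here an HMAC over a
# shared secret) only proves possession of the key. A bot holding the
# same key passes verification exactly as a human would.
# The secret and messages below are illustrative assumptions.

SECRET = b"exchanged-in-person"

def sign(message: bytes) -> str:
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(message), tag)

human_msg = b"hello, it's really me"
bot_msg = b"hello, it's really me"   # produced by a bot that obtained the key

print(verify(human_msg, sign(human_msg)))  # → True
print(verify(bot_msg, sign(bot_msg)))      # → True, just the same
```

Nothing in the math distinguishes the two senders; the "is this a human" question lives entirely outside the protocol, which is the post's point.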
I think that there is a secondary problem, in that the way Mastodon is built magnifies this. Not only do I have to be able to trust that you are human, but I have to trust that the people you trust, who could be on other servers with different following regimes that I may not understand, are also human. And I don't trust that. In fact, I believe (from experience) that the entire Pleroma following regime is incredibly bad. So if you have friends, who have friends, who are on Pleroma, I now have to wonder how my global feed will be affected if I connect with you - if I have any interest in using the global feed whatsoever.
As for the first problem, I don't see a solution. I think that the only way that anyone will ever be able to ensure that two parties are human beings will be meeting in person and physically exchanging private keys. Everything else has now been compromised.
Also, here's a fun one I found - suppose you want to jerk off to porn you find on the "fediverse", but you don't want the people you follow to be able to see that you jerk off to porn (or which porn). Enter mastinator.com/! Now you can anonymously follow anyone, even if they've previously blocked you. And because the "fediverse" is accept-all/block-offenders in its connection-policy regime, as soon as mastinator.com/ is blocked, mastinator2.com/ will be made! Which opens a new can of worms - if Activity Pub takes over the internet, will that be the end of viewing the internet anonymously? If I can only see content from people I follow and vice versa, how am I able to view subversive content? In other words, how am I able to connect to Grandma, my employer, my friends, and my lover/relationship without them all interacting, short of having half a dozen accounts? And if I *do* have half a dozen accounts, how would anyone believe that anything anyone has to say is sincere or meaningful? It's all of the downsides of a panopticon with none of the benefits of anonymity.
@Peter Weyand I have a mixed bag of thoughts about that. Not sure where to start. First off: yes, you're right about making sure there's a human on the other end of the wire, unless you have some personal interaction to somehow validate that. In a way, that's what @Threema does with its different stages of contact labeling, where you know a contact will be "green" if and only if you met that person and scanned a QR code on his/her device (and vice versa). At this point, you can use some sort of certificate that's robust enough to ensure this validation doesn't "break", and you can use reasonably robust encryption to make sure no one except that person can read messages if you desire ... for exactly as long as this person keeps control over her/his device. Say the device is stolen or (on a device level) taken over by a hostile party - then this chain is broken again. And again, I ... see this not as something unique to ActivityPub or the Fediverse but as something that is true for every kind of medium, including written letters, phone calls, or whatever you could come up with. I don't necessarily think this is bad - it's just "how it is". It's like getting into a chat on some particular topic with some person in a park or a public place: you will also have a rather hard time verifying his/her identity, though at least you know (s)he's a human at that point. Every medium can and eventually will be compromised, given technology sophisticated enough to do so. Maybe it's important to keep that in mind when communicating online, no matter where and how? As for that other part, the idea of anonymously viewing content, that somehow feels funny because it seems to completely oppose the other idea - if I want a system very keen on accessing content anonymously, isn't the very idea to leave as few clues as possible about who a particular actor is?
For that example you described, this is one of the arguments quite a few Fediverse proponents come up with when talking about different instances dedicated to particular subtopics, social communities, ... : of course you don't want to have just one account that everything is tied to. You want several accounts for things you want to keep separated, much as most people I know (at least in tech) have at least two Twitter accounts - one for business and professional stuff, one for personal stuff - and are _very_ careful not to have any links between them. And I think the Fediverse is actually rather good here - it's pretty easy to have accounts that are about as anonymous as they could be. So most likely you won't block people just to make sure they don't see which porn you're looking at - why even bother, if you can be reasonably sure no one else out there knows who "you" in this particular case are? If you do that, however, you end up with your initial problem again - you have no idea who hides behind a particular account, whether it's the person it claims to be, or whether it's a person at all. That's a bit unsatisfying, but then again ... hasn't it always been this way? In Usenet? In IRC? On mailing lists? Maybe the "solution" here is, for situations in which it matters, to choose tools that do the job as well as they can - and in others, to know as much as possible about the shortcomings of a particular tool in order to still act in a responsible and safe manner? I don't have any ideas that improve this without making it much worse at the same time...
You didn't answer my question; you answered your own claim, "I am not a bot." I have no way of knowing that, and you have no way of confirming it. Meanwhile, back on the mothership...
@Peter Weyand (Back to English for this): How _could_ you ever know without being in touch personally? What kind of proof would be sufficient for you here? And maybe worst of all: to make sense, this would have to be mutual - I can't know whether or not you're a bot, either. @Georg aus Bakum
marek@kassel.social I know what a Turing Test is. My point is that we're at a point online where no reasonable human would be able to determine whether a sufficiently advanced bot (combining chatGPT and Stable Diffusion ML) was a bot or not - the Turing Test is officially broken. Given that, there's no way to know whether any information we see online is made by a collection of bots or not, with the intention of using psychological manipulation to influence our opinion (which is easier to do in an automated way than with PR shills). This then means that the online world cannot be trusted at all, which means that any sort of worldwide collective set of what might be called Truth (with a capital T) is no longer valid - if it ever was.
The Activity Pub algorithm enables this by its design.
I think we will shortly see conflict as people attempt to alter others' perception of reality through what they believe and how they believe it, leading to armed conflict. We will go through a worldwide period of unrest as many factions attempt to use AI to advance their own rhetorical objectives, with AIs competing for dominance. Now truth, much like crypto, is dependent on the amount of computational power put behind any algorithm.
And I still don't know for certain that all of the people in this chat channel are not robots.
> The Activity Pub algorithm enables this by its design.
As much as anything else that creates something out of nothing.
Sorry, but you end up talking nonsense; truth as such has never existed except in terms of cultural interpretation demanding something to be absolutely true and imposing that supposed truth upon others.
A decentralized setup in any case gives you the chance to discover "the brick wall at the back of the theater". Centralized setups even deny you that certainty or chance.
VegOS
In reply to Kristian: axbom.com/fediverse/
fediverse.party

Peter Weyand
In reply to Kristian:
@Kristian @VegOS I don't know which are bots and which aren't necessarily (chatGPT), and second, I don't know whether I can see them from my feed or whether friends' friends' friends (to some nth degree) are bots.
I also believe it is possible (and I believe I've witnessed it happening) that if I block a bot, the server that originated the bot will detect this and, in retaliation, link me to something even worse. So for example, if I block a bot that comes up with memes related to climate change, then my feed subsequently ends up filled with neonazism. I don't know exactly how this occurs - it may be that malicious bots upvote certain content and downvote certain content so that when they're banned the feed just becomes polluted in another way. It may be that there are mechanisms in the offending server to message other servers to follow me and then subscribe bots.
I can't say for certain what mechanism is causing this - all I can say is that I can look at my feed and say for certain there are wild swings in the type of content I'm seeing after banning suspicious meme-driven content. I can also say that the content I see comes in clustered "types" that match a certain set of psychological profiles for what a user would like to see. I believe this is done to promote a certain belief system, either to sell a product or to get someone to vote or act in a certain way.
I'm saying that the Activity Pub software is constructed in such a way that the entire Global Feed system is broken to the point of uselessness because of chatGPT software and automated bots. There is no way to verify for certain on here who is a human being and who isn't (and I don't think that public key cryptography works any more given chatGPT can mimic human generated text).
Is this your problem? Probably not. Activity Pub is a W3C standard now, for better or worse. I'm just saying it doesn't work as advertised, and it's obvious to anyone who uses this software that its design can be used to manipulate others.
I believe the user base comprises three types of people -
Those who know this. These people are making the bots in order to manipulate the user base and ultimately cash in or are social media types that know how to appease the new algorithm masters and say the right memes so that they can cash in. These are people that are intentionally manipulating people (that is controlling what information they see and thereby controlling how they think and act). These people are evil. I don't know any other word for it.
There are those who know this is going on but don't want to participate, don't know how to cash in, or both. We're the outsiders in a system that we see becoming worse than Twitter. There you knew who the owners were; it wasn't some shadowy cabal that would only come to light when the whole thing came crashing to the ground spectacularly (and it will). Some people have few friends and don't link to the global feed. Some people are treated like dupes or have their reputation smeared by people they annoy.
Again, go to any of the sites using the Pleroma software that have more than 20 people and look to see whether the site isn't almost entirely bots. Is this your problem because you are running Friendica? It is if any of your users link to that site, or have followers to the nth degree who link to that site. Behold the glory that is Activity Pub!
And then, there's everyone else who has the intelligence of a hamster and personally I couldn't tell if their shitposting were different than a robot if I tried.
Take all this for what you will. If you want to call me a nutter that's fine, but if you don't take a hard look at Pleroma or how the Activity Pub software works then it's just name calling. My Global Feed is borked and it's because this software doesn't work as advertised.
Kristian
In reply to Peter Weyand: @Peter Weyand As far as I'm concerned, I wouldn't call you a nutter, but I can't ... really follow you here because, to be honest, I don't get your problem. I'm on Mastodon and Friendica. Most of my interactions happen either with the local timeline or my "friends" feed, in the case of Friendica even more so with a group of preferred people I enjoy interacting with on a daily basis. I haven't really looked outside this scope because I never saw the need for it, except while searching for some particular specific topics (which is a thing of its own, given the Fediverse still lacks a reasonable way of searching content). So personally, a messed-up global feed isn't really _my_ problem. Rudely speaking, an unfiltered global feed has been next to unusable on most of the instances I've seen so far, and most likely a "global feed" as in "show me _everything_ there is" is unusable in any medium of sufficient size. I do, however, see three things here:
(a) Size. One of the frequently announced ideas is that the Fediverse should grow around small instances, ideally individual instances. Knowing this is difficult, that might fix the problem you've seen on instances with more than 20 people. Get yourself a small crowd of people you trust and roll your own, knowing to block or defederate servers that cause havoc or are annoying to you and yours.
(b) ChatGPT and recognizing ML-generated content. Sure, I think it's more than likely that ChatGPT and similar tools could be "weaponized" in such a way. This feels threatening, but I don't see it as something a protocol such as ActivityPub could ever prevent. Maybe you have some suggestions on how to handle that?
(c) I don't get the last part of your message ("... it's because this software doesn't work as advertised"). What exactly do you mean by that?
@VegOS
VegOS
In reply to Kristian • • • Kristian
In reply to VegOS • • • VegOS
In reply to Kristian • • • …ᘛ⁐̤ᕐᐷ jesuisatire bitPickup
In reply to Kristian • • • @Kristian @VegOS @Peter Weyand
I absolutely can confirm Peter's claim, same over here.
I'm not going to discuss this, much less with some possible bot that's set up to make me lose time or energy.
The issue ends up at the instance level: big instances, no checks on the users and abusers .. bad.
From that perspective, @Eugen has or might have some issues, because the biggest instances are built by him. I reckon that's good for several reasons, like testing and scaling, but if they become polluters, I'm going to do the same with them as I did with some chatbot that simply ran out of code when I called "him" out.
The world we live in is not Eugen's fault.
If this goes on, I'll start to publicly call out the sites that apparently are the worst offenders as unsafe, block them, and ask my respective admins and community to block them too.
Those are our tools and we'll have to work it out.
Does anyone really believe "they" will just let us happen like that?
Kristian
In reply to …ᘛ⁐̤ᕐᐷ jesuisatire bitPickup • • • @…ᘛ⁐̤ᕐᐷ jesuisatire bitPickup
This is an interesting take that made me think a lot. Maybe that's a dimension of the whole ChatGPT and ML "mess" we didn't even consider so far: does the mere existence of these tools also call for a new line of "defense" in our reasoning and communication, claiming we can't or won't try clarifying our views to each other because there's a legitimate assumption we're "just" discussing with a bot? That's quite scary.
That aside, well, I agree with that last part: those are our tools and we'll have to work it out. Following Peter's statement, I somehow get the feeling that there's an assumption about possibilities offered by ActivityPub which the protocol simply can't provide, most likely both because it was never meant to and because these are social, not technical, issues that need to be addressed on a social, not technical, level. So far, I have yet to see a valid solution to this whole "trust" issue that works without having met a human being "in person" once and exchanged some sort of "credential" that allows for re-recognizing them online as well. But hasn't it been like this ever since? Was it different back in the age of mailing lists or newsgroups? It doesn't seem new, just more ... difficult with the rise of ChatGPT et al.?
@VegOS
@Peter Weyand
Peter Weyand
In reply to Kristian • • • @Kristian @VegOS
What you're saying doesn't make sense to me. The promise of ActivityPub is that there should be chat rooms of fewer than 20 people. OK ... isn't that just what a websocket is?
ActivityPub is premised on this: the client talks to a server; the server sockets to several other clients; the server also connects to other servers (ActivityPub!) to distribute content, and those servers in turn socket to their own clients. I'm saying this is broken, because if any of the friends I connect to outside my server connects to a server that is bot-infested, then my global feed fills with bots spreading misinformation.
OK, so I'll grant this: if I'm on server A and some other guy is on server B, I can connect to him, we can talk, and I can see his posts. But I can't "follow" him, because I don't know that what he follows, or what those he follows follow to the nth degree, aren't bots. So what, I'm supposed to use a local feed of known users and follow their stuff only? How do I discover other people's content unless I use the regular old internet or meet someone in person?
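The client-server-server relay described here can be made concrete with a minimal sketch of what one server actually sends to another: an ActivityStreams JSON document delivered by HTTP POST to the recipient's inbox. The URLs are invented, and a real implementation would also attach an HTTP Signature; this sketch stops at building the payload.

```python
import json

def make_follow_activity(actor: str, target: str) -> dict:
    """Build a minimal ActivityPub Follow activity (field names follow
    the ActivityStreams vocabulary)."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Follow",
        "actor": actor,      # e.g. https://server-a.example/users/alice
        "object": target,    # e.g. https://server-b.example/users/bob
    }

def deliver(activity: dict, inbox_url: str) -> str:
    """Server-to-server federation is an HTTP POST of the activity JSON
    to the recipient's inbox; here we only serialize the body."""
    body = json.dumps(activity)
    # A real server would now POST `body` to `inbox_url`, signed with
    # the sending actor's key. This sketch returns the payload instead.
    return body

activity = make_follow_activity(
    "https://server-a.example/users/alice",
    "https://server-b.example/users/bob",
)
payload = deliver(activity, "https://server-b.example/users/bob/inbox")
```

Once server B accepts the Follow, it starts pushing Alice's target's posts to server A the same way, which is exactly the relay chain the post above describes.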
By "work as advertised" I mean there should be a way to use the global feed in Friendica (or ActivityPub generally). As for playing whack-a-mole with every instance that comes up by adding it to a ban list: that's like asking a website owner to add every website in the world to a robots.txt file, or to give their email address and cell phone number to everyone who visits their site.
As for how to fix this? Oh geez, I don't know that it's possible at this point. I've thought of complicated ways of trading public keys layered on top of the ActivityPub protocol, but there would have to be a way of verifying that someone isn't a ChatGPT bot, and I don't think that's currently possible. I think public key encryption over the internet is broken for this purpose. And it's not a matter of "I know you're not a bot and you know I'm not a bot". ActivityPub is made so that "I need to know that you're not a bot, and that you don't follow bots, and that they don't follow bots, and that they don't ..."
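The key-trading idea could be sketched, in toy form, with a pre-shared secret and message authentication codes. To be clear about assumptions: this is an illustration, not anything ActivityPub specifies; a real scheme would use asymmetric signatures, and, as the post notes, none of it proves the key holder is human.

```python
import hmac
import hashlib

# Hypothetical scenario: two people met in person and exchanged a
# secret key, so they can later authenticate each other's messages.
SHARED_KEY = b"exchanged-in-person"

def sign(message: bytes, key: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(message, key), tag)

tag = sign(b"hello fediverse", SHARED_KEY)
```

A forged or tampered post fails verification, but nothing here distinguishes a human key holder from a bot holding the same key, which is Peter's point.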
What ActivityPub is to me right now is a websocket client with a way of connecting to other websocket clients, and a button next to it that says "Global Feed" which should instead say "Here Be Dragons".
Kristian
In reply to Peter Weyand • • • @VegOS
Hypolite Petovan
In reply to Peter Weyand • • • Kristian
In reply to Hypolite Petovan • • • @VegOS
@Peter Weyand
Peter Weyand
In reply to Kristian • • • @Hypolite Petovan
@VegOS
@Kristian
Is this person even real? If so, is this their account, or has it been created by someone else or a bot (without their consent)? If it is theirs, is their content feed being generated by a chatbot or not? If not, are all the replies to this thread created by content bots or not? And if the content is created by comment bots, what is the bot maker's intent?
Question 1 may be verifiable. Question 2, maybe not. For question 3, they may actively lie to you. Question 4, who knows. Question 5 falls into the realm of speculating about a psychological manipulator.
Given that I've replied to this thread, does that mean that every chatbot in the thread now sees that I'm a dupe who took the bait? Does this mean I'm added to the "mark" category and more such stupidity will show up in my global feed? View my comment here (libranet.de/display/0b6b25a8-1…) and all the rest of the comments in the thread (mastodon.social/@Sheril/109667…). Is no one responding because I'm being a dick, or are all of these comments one-line hot takes from bots attempting to upvote someone on PBS who wants people to watch her show and get her post and her pretty face into as many global feeds as possible?
Grandma sitting at home thinks this woman is popular on the internet and will watch her show. Idiots with guns will think their chosen shibboleth is popular on the internet with "cool" reactionary fascists. Everyone else will tune out and realize the game is rigged - and that's not good when you're out here trying to "remake" the internet, the machine the stupid monkeys that discovered fire decided to hook their entire culture to.
And this is an easy example. Shit, for all I know this woman's trying to prove a point by being this transparent. I've speculated above that the way that bots interact with posts downstream from me can manipulate the social graph of k-means clustered social interactions, but I'd have to dive into the Activity Pub protocol to see how they're doing this or the number of bots required.
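The manipulation speculated about here, coordinated bot accounts inflating interaction counts so a post dominates feeds, can be illustrated with a toy ranking model. All account and post names below are invented, and real fediverse timelines are chronological rather than count-ranked; this only shows why raw interaction counts are gameable.

```python
# Toy model: a naive "most interacted-with" ranking and what a batch
# of coordinated bot accounts does to it.
bots = {f"bot{i}" for i in range(50)}

posts = {
    "human_essay": {"boosts_from": {"alice", "bob", "carol"}},
    "bot_bait":    {"boosts_from": set(bots)},
}

def rank(posts: dict) -> list:
    """Sort posts by raw boost count, most boosted first."""
    return sorted(posts, key=lambda p: len(posts[p]["boosts_from"]),
                  reverse=True)

def rank_filtered(posts: dict) -> list:
    """Same ranking, but ignoring boosts from known-bot accounts."""
    return sorted(posts,
                  key=lambda p: len(posts[p]["boosts_from"] - bots),
                  reverse=True)
```

Fifty bot boosts put `bot_bait` on top of the naive ranking; subtracting the known-bot set restores `human_essay`. The hard part, as the thread keeps noting, is building that bot set in the first place.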
This isn't a "pretend" problem. This is a "there is no way to tell information from disinformation" problem. The Kremlin no longer has to pay some group of underpaid interns in Latvia to pretend to be angry Americans, now you can automate the invective.
This is serious. People act in real life based on what they read and how they interact with the internet. Many people change their behaviors just slightly enough that in the aggregate it can sway elections, cause products to be bought, or wars to be fought. If you think you're immune, boy, have I got news for you.
If you're bots, I don't particularly care. I'm writing this in large part as a record of rhetoric that I can use for myself later, much like an interactive diary. What you're building has no safeguards for how *people* work and it's going to cause a total shitstorm when it breaks.
Kristian
In reply to Peter Weyand • • • @Peter Weyand Hmmmm. There are valid questions in there, but I wonder whether you can name any kind of medium available to a large audience that has solved them? TV? Printed newspapers? Any other digital format? And again, what would be your suggestions to fix this, on any particular level? What means do you see to protect a system against this kind of intentional abuse? The best and maybe only approach I see at the moment is trying to educate people to think twice, question things, try to figure things out - and at some point be aware that these kinds of abuse are always possible, just look at wartime propaganda and manipulation in "old-fashioned" media channels. That's tough, but I really don't have any better idea.
@Hypolite Petovan @VegOS @Friendica Support @Peter Weyand
Peter Weyand
In reply to Kristian • • • @Kristian @Hypolite Petovan @VegOS @Friendica Support
In a media empire there is an editor and someone who is ultimately responsible for running the business. The buck stops with someone. There are W-2s, so you know which reporter is being paid to write which story.
This is an entirely other beast. This will end badly.
Kristian
In reply to Peter Weyand • • • @Peter Weyand So trust is all about money, then? Would you trust the "owner" responsible for running the business? Why should they lie to you any more or any less than anyone else out there? And worst of all: maybe they're not even lying or trying to manipulate, but are simply totally _convinced_ of a certain world view that should be rejected by all means? And we're really talking _mass media_ here - media reaching even people who don't have access to computers or the internet. That seems a gross oversimplification, and at least to me it boils down to the same question: whom do you trust and why, and how do you detect manipulation in reasonably complex media structures? I don't have a solution, but I don't see one on your side either. And, since this is where we started, I still fail to see what could be done specifically at the level of a technical protocol to fix this.
@Hypolite Petovan
@VegOS
…ᘛ⁐̤ᕐᐷ jesuisatire bitPickup
In reply to Kristian • • • @Kristian @Hypolite Petovan @VegOS @Peter Weyand
Although the description of the problem itself is true, and the level of language quite outstanding, we are just facing the same problems, the same shit, in a different way.
For example, algorithms that feed us what we want to see (or the contrary) are normal on the other sites. So that "lady" normally just sells you some ads or lures you to sex.com. AI, in the sense of understanding and semantic interpretation, is a real threat and danger to society - I guess much worse in walled gardens than out here where we are, where we might uncover or evade them more easily.
By the way, Friendica also supports a positive server list as a restriction; that's most likely one of the features to keep in mind as well.
Peter Weyand
In reply to Kristian • • • @Kristian @Hypolite Petovan @VegOS
It's always about the money.
Sad, but true.
People aren't making these botnets for no reason other than the lolz - OK, maybe some are. What the majority of them are doing is attempting to convert attention into money by cornering the market.
You don't know who they are. With a newspaper, you do know who they are. That's the difference.
Kristian
Unknown original post • • • Geo Rg
In reply to Kristian • • • Kristian
In reply to Geo Rg • • • @Georg aus Bakum That's tricky, because the use of the word "algorithm" in this context is usually problematic or misleading. Generally, when people say "no algorithms" here, they mean the absence of the logic at Twitter, Facebook, Instagram, ... that washes posts to the top depending on the amount of interaction or other obscure metrics while hiding others. The Fediverse _of course_ also has algorithms, because computers cannot work without algorithms. The timeline is sorted chronologically - sorting is the first basic task through which computer science students commonly learn what algorithms even are. Chronological sorting _is_ an algorithm. The way systems federate with each other is based on an algorithm; see for example here:
jlottosen.wordpress.com/2022/1…
jlottosen.files.wordpress.com/…
These are algorithms; they have boundary conditions and properties, and if I know the properties, I can manipulate them by working with those properties. Example construct: I have a chronological timeline, so posts are sorted in descending order by date. I can potentially "flood" such a timeline by posting an extreme amount in loose succession - then people who have me in their home timeline, or my instance in their federated timeline, see practically only what I want, or have to scroll an extreme amount (algorithms at Twitter, Instagram, Flickr ... prevent this quite well). Or: I create an instance on which only nonsense is posted (say, by bots generating a lot of content at high frequency). If I get a plausible user there whom someone on another instance follows (or create several users on other instances who follow my user on my evil instance), then the federated timeline of the other instance will also show all that junk. That's put in simplified, abstract terms; I have no idea whether and how the common implementations (Mastodon, Friendica, ...) technically prevent this.
Possible solutions would be: don't follow suspect users or users of suspect instances. Or mute/block such an instance. Or mute recognizable accounts that post too much. Or ...? The challenge: can you tell whether the posted content is "nonsense" or not? I believe the core problems here are the basic expectations placed on the underlying technology and systems, as well as a fuzzy notion of the term "algorithm". In the posts and questions here, a lot of things seem jumbled together that interact somehow but are potentially different problems.
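The mitigations listed here (mute or block suspect instances or accounts) boil down to filtering the federated timeline against a blocklist before display. A minimal sketch, with invented instance names:

```python
from urllib.parse import urlparse

# Hypothetical admin-maintained blocklist of instance domains.
BLOCKED_DOMAINS = {"evil.example"}

def visible(post: dict) -> bool:
    """Hide posts whose author's account lives on a blocked instance."""
    domain = urlparse(post["author"]).netloc
    return domain not in BLOCKED_DOMAINS

timeline = [
    {"author": "https://friendica.example/profile/kris", "text": "hi"},
    {"author": "https://evil.example/users/bot42", "text": "spam"},
]
filtered = [p for p in timeline if visible(p)]
```

The weakness the thread already names is visible in the design: the blocklist is reactive, so a fresh domain passes until an admin adds it.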
🕊️ Das Leben ist schön
In reply to Kristian • • • @MrGR What possible solutions are there?
Blocking instances is a poor tool. Blocking and/or muting individual accounts quickly becomes a mammoth task as user numbers grow, but that's how I currently handle it...
Kristian
In reply to 🕊️ Das Leben ist schön • • • @Das Leben ist schön Fundamentally, better solutions than the current ones will be needed there. The problem isn't entirely new either; see for example pod.geraspora.de/posts/1235904… , which is two years old (and we know the problem from e-mail too, even if it differs slightly). I suspect a lot of it really comes down to the question of how you work with the system. That's what I meant above in the thread - I rarely or never use the federated timeline, I have groups (on Friendica) or lists (on Mastodon) for "important" users whose content and topics interest me, and I occasionally weed out accounts I follow. Maybe some of this resolves itself through the awareness that the global timeline is "only" automatically federated and therefore inherently unsorted and unmoderated. That still leaves the fact that federating with potentially "evil" instances generates network load and consumes bandwidth. That's annoying, and in the long run it may need more solutions analogous to spam and junk filtering for e-mail (blacklisting, greylisting, exchanging information about "nasty" instances, ...). Or else stopping the federation of instances' entire public timelines, and instead federating only the posts of users there whom someone local follows. I don't know, though, whether that is (yet, or technically at all) possible.
@Georg aus Bakum
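The last idea above, federating only posts from remote users whom someone local actually follows instead of mirroring whole public timelines, could be sketched like this. All names are invented, and as the post itself says, it is unclear whether current implementations work this way; this is just the acceptance rule made explicit:

```python
# Hypothetical map of local users to the remote accounts they follow.
local_follows = {
    "kris": {"https://remote.example/users/anna"},
    "leo":  set(),
}

def accept_for_federated_timeline(post_author: str) -> bool:
    """Accept an incoming remote post only if at least one local user
    follows its author; everything else is dropped at the door."""
    return any(post_author in follows
               for follows in local_follows.values())
```

Under this rule, a bot instance's flood never reaches the federated timeline unless a local user has explicitly followed one of its accounts, which shrinks the attack surface to individual follow decisions.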
Geo Rg
In reply to Kristian • • • Peter Weyand
In reply to Kristian • • • @Kristian @Georg aus Bakum @Das Leben ist schön
Thanks, Kristian, you've understood the problem. I'm not saying that Friendica or Mastodon use an algorithm. Since ActivityPub allows multiple servers to connect, botnets can instead exploit the way servers connect in order to manipulate users. It's the botnets (and those bots) that have their own internal algorithms for manipulating the network. In many ways that's *worse*, because instead of one central entity that is known and can be criticized, there are now multiple unknown shadow entities controlling the flow of information. Obviously money is involved. I don't know what the solution is (or whether there is one), and because ActivityPub is becoming so popular, I worry that these bots can manipulate people's behavior. But this *is* a problem. Thank you for writing the explanation in German for your German colleagues - you've explained my point exactly.
Geo Rg
In reply to Peter Weyand • • • Kristian
In reply to Geo Rg • • • @Georg aus Bakum I don't share that last assessment either, but I believe there is simply a rational, sober view of things. The Fediverse is no silver bullet, and neither is ActivityPub. Centralized systems and distributed infrastructures have, from many perspectives, very different attack surfaces and conceivable threat scenarios. The current Fediverse tries to correct a number of the problems we found in the centralized systems - above all the fact that there is a single, only moderately transparent structure on whose goodwill and decisions it depends who can communicate and who can't, who gets blocked and who doesn't, ... . A decentralized system solves that by breaking up this central authority, distributing those decisions across many small shoulders, and, for example, giving individual communities the ability to create their own spaces with their own rules, which they don't have to justify to anyone. The immediate downside of this exercise: if, say, a big instance like chaos.social decides to block another big instance like mastodonten.de (as happened a few years ago), that abruptly breaks communication for a great many users. I don't believe the Fediverse is more or less dangerous - but I think we do well to discuss critically too, and to treat critical remarks constructively. One shouldn't forget either: at the end of the day, the Fediverse was, until recently, rather ... small compared to platforms like Twitter or Facebook. With size, new challenges will certainly come.
Also interesting: jwz.org/blog/2022/11/mastodon-… : if I post a link to a website into the Fediverse, the instances that see this link will contact the server being linked to. Consequence: if a link gets federated on a large scale, suddenly a few thousand instances may hit the server it points to, completely uncoordinated. If the link points to a web server on a RaspPi behind a thin line, or a system like solar.lowtechmagazine.com/, that can mean real stress - or, conversely, raise the demands on server and network resources, which for various reasons (energy saving, at the very least) we actually don't want.
Or (I don't have the link at hand anymore) a Mastodon user's request not to share large videos via Mastodon, because the consequence is that the videos (like images, too) get transferred many times between instances and stored locally, filling up the systems.
Both of these are things that neither surprise nor frighten, and in my view they aren't fundamentally bad or unmanageable - but they are topics where "centralized" systems can have certain advantages, and which the Fediverse community will probably want to deal with somehow at some point. The question of how far botnets can cause nonsense by flooding the systems with content nobody wants and machines filter poorly will possibly be part of that too - see also the quality some spam and phishing e-mails have these days, which are so "good" that they slip through even good automatic filters and people transfer money to strangers.
@Das Leben ist schön @Peter Weyand
Geo Rg
In reply to Kristian • • • Geo Rg
In reply to Geo Rg • • • Geo Rg
In reply to Geo Rg • • • Geo Rg
In reply to Geo Rg • • • Kristian
In reply to Geo Rg • • • Geo Rg
In reply to Kristian • • • Kristian
In reply to Geo Rg • • • Geo Rg
In reply to Kristian • • • Peter Weyand
In reply to Kristian • • • @Georg aus Bakum
libranet.de/display/0b6b25a8-1…
Can you reply to this post? I can't tell whether it is a bot-generated post or not.
Peter Weyand
2023-01-11 17:48:34
Geo Rg
In reply to Peter Weyand • • • Peter Weyand
In reply to Kristian • • • @Kristian
@Georg aus Bakum
So first - I wrote a little about this at libranet.de/display/0b6b25a8-1… In essence: for the internet to work, there must be a way to pass messages remotely, there must be a way to confirm that the person on the other end of the message is a human being, and this way must be robust (that is, we must believe there won't be a chat bot in the future powerful enough that our verification method breaks).
This goes beyond SHA hashing. Public/private key encryption only works if we believe that two people can share a key securely - that is, if I trust that the sender is human.
I think there's a secondary problem, in that the way Mastodon is built magnifies this. Not only do I have to be able to trust that you are human, but I have to trust that the people you trust - who could be on other servers with different following regimes that I may not understand - are also human. And I don't trust that. In fact, I believe (from experience) that the entire Pleroma following regime is incredibly bad. So if you have friends, who have friends, who are on Pleroma, I now have to wonder how my global feed will be affected if I connect with you, if I have any interest in using the global feed whatsoever.
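The transitive-trust worry here, that following one account exposes a feed to everything reachable through its follows to the nth degree, is just reachability in the follow graph. A toy sketch with invented account names:

```python
from collections import deque

# Toy follow graph: each key follows the accounts in its list.
follows = {
    "me": ["friend"],
    "friend": ["friend_of_friend"],
    "friend_of_friend": ["pleroma_bot"],
    "pleroma_bot": [],
}

def reachable(start: str) -> set:
    """Breadth-first search: everyone whose activity can surface in
    `start`'s network feed through chains of follows."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in follows.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

One direct follow makes `pleroma_bot`, three hops away, reachable; vetting your own follows bounds only the first hop, not the closure, which is exactly the asymmetry the post complains about.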
As for the first problem, I don't see a solution. I think that the only way that anyone will ever be able to ensure that two parties are human beings will be meeting in person and physically exchanging private keys. Everything else has now been compromised.
Also, here's a fun one I found. Suppose you want to jerk off to porn you find on the "fediverse", but you don't want the people you follow to be able to see that you jerk off to porn (or which porn). Enter mastinator.com/! Now you can anonymously follow anyone, even if they've previously blocked you. And because the "fediverse" is accept-all/block-offenders in its connection policy, as soon as mastinator.com/ is blocked, mastinator2.com/ will be made! Which opens a new can of worms: if ActivityPub takes over the internet, will that be the end of viewing the internet anonymously? If I can only see content from people I follow and vice versa, how am I able to view subversive content? In other words, how am I able to connect to Grandma, my employer, my friends, and my lover without them all interacting, short of having half a dozen accounts? And if I *do* have half a dozen accounts, how would anyone believe that anything anyone has to say is sincere or meaningful? It's all of the downsides of a panopticon with none of the benefits of anonymity.
Mastinator
mastinator.com
Kristian
In reply to Peter Weyand • • •
@Peter Weyand I have a mixed bag of thoughts about that. Not sure where to start. First off: yes, you're right about making sure there's a human on the other end of the wire, unless you have some personal interaction to somehow validate that. In a way, that's what @Threema does with its different stages of contact labeling, where you know a contact will be "green" if and only if you met that person and scanned a QR code on his/her device (and vice versa). At this point, you can use some sort of certificate that's robust enough to ensure this validation doesn't "break", and you can use reasonably robust encryption to make sure no one except that person can read messages if you desire ... for exactly as long as this person keeps control over her/his device. Should the device be stolen or (at the device level) taken over by a hostile party, this chain is broken again. And again, I see this not as something unique to ActivityPub or the Fediverse but as something that is true for every kind of medium, including written letters, phone calls, or whatever you could come up with. I don't necessarily think this is bad - it's just "how it is". It's like getting involved in a chat on some particular topic in a park or a public place with some person: you'll also have a rather hard time verifying his/her identity, though at least you know (s)he's a human at that point. Every medium can and eventually will be compromised, given technology sophisticated enough to do so. Maybe it's important to keep that in mind when communicating online, no matter where and how?
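The QR-code verification described here essentially compares fingerprints of public keys out of band: each device derives a short digest of its key, and the two people check that what one device displays matches what the other scanned. A toy sketch (the key bytes are invented placeholders; real systems such as Threema fingerprint actual cryptographic keys):

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    """Derive a short, human-comparable digest of a public key."""
    return hashlib.sha256(public_key).hexdigest()[:16]

# In-person verification: one device displays its key's fingerprint
# (e.g. encoded as a QR code), the other scans it and recomputes the
# fingerprint from the key it received over the network.
alice_key = b"alice-public-key-bytes"
shown_on_device = fingerprint(alice_key)
scanned_from_qr = fingerprint(alice_key)
```

If the fingerprints match, the key received online belongs to the person physically present; a man-in-the-middle substituting a different key would produce a different fingerprint. As the post says, the guarantee lasts only as long as the device stays under its owner's control.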
As for the other part, the idea of anonymously viewing content: that feels funny, because it seems to completely oppose the first idea - if I want a system keen on anonymous access to content, isn't the whole point to leave as few clues as possible about who a particular actor is? For the example you described, this is one of the arguments quite a few fediverse proponents make when talking about different instances dedicated to particular subtopics or social communities: of course you don't want just one account everything is tied to. You want several accounts for things you want to keep separated, much like most people I know (at least in tech) have at least two Twitter accounts - one for business and professional matters, one for personal matters - and they are _very_ careful not to leave any links between them. And I think the fediverse is actually rather good here - it's pretty easy to have accounts that are about as anonymous as they could be. So most likely you won't block people just to keep them from seeing which porn you're looking at - why even bother, if you can be reasonably sure no one out there knows who "you" are in this particular case? If you do that, however, you end up with your initial problem again - you have no idea who hides behind a particular account, whether it's the person it claims to be, or whether it's a person at all. That's a bit unsatisfying, but then again... hasn't it always been this way? On Usenet? On IRC? On mailing lists? Maybe the "solution" here is, for situations where it matters, to choose tools that do the job as well as they can - and in others, to know as much as possible about the shortcomings of a particular tool so you can still act in a responsible and safe manner? I don't have any ideas that would improve this without making things much worse at the same time...
Peter Weyand
In reply to Kristian • • •@Georg aus Bakum
You didn't answer my question, only your own claim, "I am not a bot." I have no way of knowing that, and you have no way of confirming it. Meanwhile, back on the mothership...
news.ycombinator.com/item?id=3…
blog.elevenlabs.io/enter-the-n…
Geo Rg
In reply to Peter Weyand • • •Kristian
In reply to Peter Weyand • • •@Georg aus Bakum
Der Marek
In reply to Kristian • •en.wikipedia.org/wiki/Turing_t…
Friendica Support shared this.
Peter Weyand
In reply to Kristian • • •marek@kassel.social I know what a Turing Test is. My point is that we're at a point online where no reasonable human could determine whether a sufficiently advanced bot (combining chatGPT and stable diffusion ML) is a bot or not - the Turing Test is officially broken. Given that, there's no way to know whether any information we see online was made by a collection of bots intent on using psychological manipulation to influence our opinion (which is easier to do in an automated way than with PR shills). That means nothing online can be trusted at all, which means any sort of worldwide collective set of what might be called Truth (with a capital T) is no longer valid - if it ever was.
The Activity Pub algorithm enables this by its design.
I think we will shortly see conflict, with people attempting to alter other people's perceptions of what reality is - through what they believe and how they believe it - leading to armed conflict. We will go through a worldwide period of unrest as many factions attempt to use AI to advance their own rhetorical objectives, with AIs competing for dominance. Now truth, much like crypto, depends on the amount of computational power put behind any given algorithm.
And I still don't know for certain that all of the people in this chat channel are not robots.
Friendica Support shared this.
…ᘛ⁐̤ᕐᐷ jesuisatire bitPickup
In reply to Peter Weyand • • •@Peter Weyand @Kristian
> The Activity Pub algorithm enables this by its design.
As much as anything else that creates something out of nothing.
Sorry, but you end up talking nonsense: truth as such has never existed, except in terms of a cultural interpretation demanding that something be absolutely true and imposing that supposed truth upon others.
A decentralized setup in any case gives you the chance to discover "the brick wall at the back of the theater".
Centralized setups deny you even that certainty, or that chance.
---
and yes, I'm a bot too.
Kristian likes this.
Friendica Support shared this.