Matt Mullenweg Has Lost His Way And Is Taking It Out On The Community. The Latest…
Roger Montti over at Search Engine Journal reported that the latest installment of the WordPress drama had dropped. Matt Mullenweg, the “leader” of the WordPress project, has barred Joost de Valk, a longtime supporter of the project, from speaking at WordCamp Asia, because Matt feels Joost stabbed him in the back.
Matt Mullenweg, co-creator of WordPress, posted a statement on X (formerly Twitter) announcing that Joost de Valk was no longer speaking at WordCamp Asia and was persona non grata for having “stabbed” him when he was vulnerable and betraying his confidence. Mullenweg cited statements by de Valk to prove his point, but those statements were presented without their full context, which significantly altered their meaning.
Matt is seriously unhinged. And what’s scary is that WordPress is suffering because of this insanity. WordPress powers ~40% of the websites on the web. There is a lot at stake here if Matt goes nuclear.
He brought this on himself. And now Matt is paying the price for his actions. But sadly he’s taking the whole fucking community down with him.
There are efforts to make sure this doesn’t happen, or hopefully doesn’t. But it has a lot of us on edge, me included.
Seriously, this is one more thing that many of us don’t need right now.
#JoostDeValk #MattMullenweg #wordcamp #WordCampAsia #wordpress #WPDrama
WordPress Leader Mullenweg Silences Joost De Valk
Mullenweg bars Joost de Valk from speaking at WordCamp Asia, saying he feels Joost "stabbed" him when he was down. (Roger Montti, Search Engine Journal)
Deutschlandticket: CDU flirts with ending the public-transit subscription
Popular public-transit subscription: CDU flirts with ending the Deutschlandticket
If influential CDU and CSU politicians get their way, the 58-euro ticket for bus and rail could end in December. The surprise push could cost many commuters thousands of euros a year. (DER SPIEGEL)
cross-posted from: feddit.org/post/7864721
web.archive.org/web/2025021117…
2025 – 042: Thinkable
It feels as if this sentence has been largely forgotten.
Do you still know it, do you remember: not everything that is thinkable has to be said or done or realized. […]
#Anstand #Beachtung #Denken #Handeln #Menschlichkeit #Solidarität #Sprechen
deremil.blogda.ch/2025/02/11/0…
The Social Web Foundation announces its membership in the World Wide Web Consortium
SWF has joined the World Wide Web Consortium (W3C) to advance open standards for the social web. Evan Prodromou will be SWF’s advisory council representative as he continues in his role as maintainer of the ActivityPub and Activity Streams 2.0 specifications.
Extending and expanding the implementation of ActivityPub is a core area of work for SWF. ActivityPub is the only open standard that can underpin a truly interoperable social web, fostering innovation and usability across platforms like Mastodon, Pixelfed, Ghost, Threads, Flipboard, and a growing number of other online spaces.
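For readers unfamiliar with the standard, the payloads ActivityPub servers exchange are JSON documents shaped by the Activity Streams 2.0 vocabulary. A minimal sketch in Python (the URLs here are placeholders, not real accounts):

```python
import json

# Minimal Activity Streams 2.0 "Create" activity: the kind of JSON
# document one ActivityPub server POSTs to another server's inbox.
# All URLs are illustrative placeholders.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "id": "https://social.example/activities/1",
    "actor": "https://social.example/users/alice",
    "object": {
        "type": "Note",
        "id": "https://social.example/notes/1",
        "content": "Hello, federated world!",
    },
    # The special Public collection makes the post publicly visible.
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
}

print(json.dumps(activity, indent=2))
```

Because every federated platform reads this same vocabulary, a Note created on one server can be rendered by Mastodon, Pixelfed, or any other conforming implementation.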
The SWF and its founders have a history of engagement in the W3C. Evan was co-author on the Activity Streams 2.0 and ActivityPub specifications. As head of digital at ARTICLE 19 and later CTO of the Center for Democracy and Technology, I’ve engaged as both an invited technical expert and an advisory council member under CDT’s membership on topics related to technical standards and human rights issues.
The W3C Social Community Group is the primary vehicle for ActivityPub extensions, providing a structured process for introducing new features and updating the existing specifications. W3C Community Groups are open to anyone who would like to join and contribute.
SWF joins CDT as one of the few civil society organizations that comprise the Consortium. SWF’s membership in the W3C underscores our commitment to promoting an open and federated social web, and our alignment with the W3C mission to develop web standards through community consensus that ensures the long-term growth and accessibility of technical specifications that are openly licensed.
In terms of concrete ongoing work, we look forward to bringing end-to-end encryption to direct messages in ActivityPub, developing groups on the social web, supporting data portability with ActivityPub, and making discovery of ActivityPub objects and their authors easier. We’ll participate in ongoing maintenance and development of the core standards. We’ll also work alongside renowned experts in web accessibility, build relationships with companies and organizations that might become SWF supporters, and serve as a voice for the public interest in this important forum for multistakeholder internet governance.
Judicial farce in Bavaria: lawyer not reimbursed for 7,300 printed pages
Digitization has apparently only partially arrived in Bavaria:
A court-appointed defense lawyer from Bavaria has failed in his attempt to be reimbursed for the printing costs of a total of 7,327 pages. The lawyer had invoiced 1,872 euros in printing costs because he had to print out the electronic case files of a trial.
#foto #photo #photographie #Fotografie #Natur #Nature #Kanadagans #Kanadagänse #Graugans #olympus #omsystem #om-1 #myphoto #mywork #Vögel
Greylag goose and Canada goose
A pair, 04.02.2025
Mini PC Aoostar R7 with Ryzen 7 reviewed
With two 3.5-inch drive bays, the Aoostar R7 can also serve as a NAS. How well that works, and what else the Ryzen 7 mini PC has to offer, is shown in our review. (Kai Schmerer, heise online Bestenlisten)
Plates London becomes first vegan restaurant to get Michelin star - BBC News
Plates London on Old Street is among 22 restaurants to be awarded first Michelin stars. (BBC News)
✍ Petition: "Soziale Netzwerke als demokratische Kraft retten" ("Save social networks as a democratic force")
"To: the party leaders and parliamentary group chairs of the democratic parties in the Bundestag, the minister-presidents of the German states, and the EU Commission:
The free internet is being abolished: it has been taken over by the Big Tech monopolies. The growing dominance of platform corporations such as Meta, X, or ByteDance (TikTok) over information and exchange is leading to a concentration of opinion power that endangers our democracy.
In the digital space, a few predominantly US-American and Chinese tech corporations steer information and public debate. Their platforms do not allow unhindered access: users must surrender their most personal data to get in. At the same time, algorithms filter opaquely what users get to see and what they do not, algorithms that follow only the laws of the attention economy, freed from any orientation toward the common good or journalistic quality standards. With a flood of hate, scorn, agitation, and disinformation, a few monopoly platforms are corroding our democracies and endangering every person.
Independent offerings, meanwhile, are increasingly losing their audience and their financial basis on their own distribution channels: journalism is becoming a loss-making business because Big Tech corporations capture the bulk of advertising revenue. Journalists and media companies must subordinate themselves and their content to the platforms and their algorithms. Individual creatives and other actors are also sliding into growing dependency.
The rapid introduction of generative AI accelerates this process: users have hardly any reason left to visit the website of an original source, because AI-powered search engines summarize the content, on the basis of opaque technical processes that alter the tenor or the statements, often with repeated copyright infringement. These AI services are apt to cement the dominance of the platform corporations and to further marginalize journalistic media before they die out.
We see an urgent need for action by everyone: companies, associations, civil-society institutions, and policymakers at the national and European level.
Democracy-strengthening offerings must be expanded; democracy-damaging platform monopolies should lose their massive privileges immediately.
Why is this important?
Around 100 actors from culture, business, and the media have joined forces in the Save Social initiative.
[1] Together they propose ten concrete steps to free the internet from the dominance of the monopoly corporations and to strengthen alternative platforms for information and debate:
- We strengthen alternatives with good content: Content financed with public funds must be fully available at least also on those platforms that are based on open and recognized standards and protocols. Politics, public authorities, universities, research institutions, libraries, and also public broadcasters will be obliged to make all content, without exception, available at least also on these platforms. They must open up their own offerings, such as media libraries, to these platforms via protocols.
- We strengthen alternatives structurally: Public institutions (politics, authorities, universities, libraries, public broadcasters, and others) today produce exclusive content for Instagram, TikTok, and other monopoly platforms at great expense. In the future, they will be obliged to invest at least the same financial and structural effort in producing content and distributing it on these open digital platforms. At regular intervals, supervisory bodies will review whether the share of effort devoted to open platforms can be increased without endangering the required reach of the offerings.
- We invest in the development and usability of alternatives: The federal government and the states will be obliged to massively expand their investments in the development and strengthening of these open platforms and protocols, and of offerings built on them. The goal is in particular to improve their usability, to allow growth through sufficient technical infrastructure, and to increase market penetration through marketing. In addition, the federal government and the states will create citizens' bodies that define and monitor the requirements for such democracy-strengthening offerings.
- We enable public-interest offerings: A legal framework will be created for operators of democracy-strengthening platforms and offerings in which they can operate as nonprofits.
- We improve media education: Educational institutions, especially schools and providers of media-literacy programs, will be obliged to teach primarily the use of open and democracy-strengthening platforms and networks. At the same time, the use of hardware and offerings of the monopoly platforms in educational institutions will be restricted, with the goal of avoiding them entirely where possible. In addition, teaching and learning content from the state education system is to be made available on open platforms, provided the authors have granted the necessary rights.
- We create diversity and transparency: Market-share caps will be introduced for large platforms; if they are exceeded, parts of the company must be divested, or content and distribution channel must be separated. A digital tax on tech giants will be levied to finance a democracy-strengthening information and discussion infrastructure as well as quality journalism.
- We open up platforms: Large platforms must introduce open standards and interoperability between offerings, so that users can use content independently of any one vendor and do not lose their own content when switching services. Such a switch must also be made easier through complete download options for one's own content.
- We enable visibility: Today, monopoly platforms penalize links that point to offerings outside these platforms, such as one's own website, for example through reduced reach or visibility. In the future, such outlinks must no longer lead to a disadvantage in the distribution of content, so that users can point to offerings outside the platforms without penalty. For verification, large platforms must transparently disclose their algorithms.
- We give communities real meaning: Independent supervisory bodies must monitor compliance with the measures above, with the goal of curbing monopoly positions, criminal speech, and targeted disinformation and election manipulation. The platforms must employ contact persons, easily reachable via several channels, who act quickly in cases of account suspension, hate, or defamation.
- Whoever earns money with content must take responsibility:
To this day, platforms are allowed to monetize even criminal content (racism, discrimination, Holocaust denial, etc.). The liability privilege for particularly large platforms will be put under review. Just as media companies are responsible for their content under press law, platforms must take responsibility and be liable for their content.
[1] The signatories include, among others:
the musicians Jan Delay, Dota Kehr, and Sebastian Krumbiegel
the authors Marc-Uwe Kling, Saša Stanišić, Nina George, Uwe Timm, and Isabel Bogdan
journalists such as Dr. Eckart von Hirschhausen and Nadia Zaboura
the entrepreneur Sebastian Klein
the tech blogger Sascha Pallenberg
the unions Deutscher Journalisten-Verband (DJV), Deutsche Journalistinnen- und Journalisten-Union in ver.di (dju), unisono (Deutsche Musik- und Orchestervereinigung e.V.), Freelens (the professional association of photographers), as well as Greenpeace e. V. "
🇩🇪
Save social networks as a democratic force. Nina George, writer, honorary president of the European Writers’ Council (EWC). Photo: Heike Blenck. Rocko Schamoni, writer, director, ... (Save Social)
I'm a solarpunk writer; here's my smallweb site with free stories and novels!
Hey solarpunks!
I'm Clockwork, an Italian solarpunk physicist, juggler, and writer. I'd like to share my website with you; I set it up two years ago with zero HTML knowledge and recently gave it a revamp. Since I write in English but the publishing system is extremely walled, I want to make myself known among like-minded people without selling my soul.
I write mostly solarpunk (you can check out the short story page and the Meteorina "saga": three long stories, with a fourth one coming soon), but you can also find some Neolithic fantasy and sci-fi with community and resistance themes. All stories are FREE TO DOWNLOAD; there are no Amazon links, and most files are interoperable and freely accessible. There is a donation button, but I'd rather you enjoy these stories in a non-transactional way.
I hope you will find them interesting and possibly get inspired!
EDIT: I'm an idiot, here's the link! Thank you for pointing it out 😅
"Anno 117": beta test sign-ups now open
Ubisoft plans to test "Anno 117" in several beta phases ahead of release. Anyone interested can sign up now. (Daniel Herbig, heise online)
Performers
Kikuchi Naoko (koto zither), Kuroda Reison (shakuhachi flute), Yokokawa Tomoya (host)
"In keeping with the monthly theme of the Japanese Cultural Institute, the ensemble Doppelmond invites you on a musical journey through time to Japan. The traditional instruments koto (arched board zither) and shakuhachi (bamboo flute), which received their present form in the 16th and 17th centuries, look back on a long and deeply rooted history. Their sounds are closely tied to Japan's rich musical tradition, which encompasses both spiritual and cultural elements.
These instruments not only preserve tradition but also open up new creative horizons. By merging these traditional sounds with the innovative flair of contemporary composers, we want to carry the full potential of this unique sound world out into the world.
Kikuchi Naoko, a renowned koto player based in Frankfurt, and the talented shakuhachi player Kuroda Reison from Japan will accompany you on this musical journey through time, together with Yokokawa Tomoya, the director of the ensemble Doppelmond.
In cooperation with Doppelmond - Sogetsusha e.V."
Info via @sumika@mstdn.social
Quelle: sumikai.com/japan-erleben/in-d…
#weeklyreview 06/2025
Again a mixed week
Took Monday off to drive a friend to an insurance examiner who was to assess that he's not capable of working in his job. He's got severe ME/CFS, can't sit or stand upright without passing out within minutes, has the attention span of a squirrel before his brain fog kicks in, etc. The examination was originally meant to be two days of four-hour sessions each. But it was clear from the start that he'd not be able to do that. The bloody examiner sits in a non-accessible building in the south of Berlin. My friend needs a wheelchair to move around. Of course the building had neither an elevator nor a ramp or anything like that. So it took my friend about 25 minutes to shuffle up to the 3rd floor on his bum. Yeah… because he can't stand upright and just walk. And he's also a bit heavy, so he can't be easily carried. And our brilliant health system no longer pays for transporting him and carrying him upstairs, either. You'd need three people to move him, but the default staff on official ambulance cars is only two.
Overall a rather humiliating and exhausting experience for him. I just hope that he'll finally get the payout he deserves from his insurance.
The good thing, though, was that I had 4 hours to roam around Friedenau and was able to pay my good friend Boerge a visit in his new home 😀
Tuesday and Wednesday I was fighting with work bureaucrats to get an exemption for using my corporate USB disks to free up space on my internal hard disk for experiments with LLMs. I get the need for rules and restrictions etc. But there must be a way to not prevent people from doing their work. There is all this mandatory security training and tooling, but I think there should be a way for people to prove that they know what they're doing and shed the usual corporate shackles that prevent you from doing stuff and cripple your expensive corporate hardware.
Baltic Sea
For the rest of the week (which was the Berlin winter holiday week for schools), my daughter and I took off to the island of Usedom. This time we stayed in Świnoujście, on the Polish side of the isle.
The weather left room for improvement the first two days, with grey clouds, cold wind, and drizzle. But that's kind of what we came for. The sea is awesome in any weather, and our hotel had a spa area where we spent time in the pool and sauna 🙂 I also got a fair bit of reading done on my Kindle in the sauna. "The Dawn of Everything" is really good.
We explored the local restaurants and roamed around the city, beach and piers a bit.
On Saturday we took a trip to the German side, paid Gulliver a visit, and had good pizza at the pier restaurant in Heringsdorf.
In the afternoon the sun finally came out and people were flocking to the beach for a walk. We saw the TF Line ferries coming in and eventually had really good Sushi at the Hilton Hotel Sushi Bar and Grill.
Overall a rather relaxing four days at the Baltic Sea.
But there was not a single #meshtastic node on the whole Baltic Sea. I was carrying my T-Echo the whole time to check for any nearby radios. Nothing, nada, zilch.
#BalticSea #beach #enEN #food #meshtastic #Poland #Usedom #Vacation #weekly #weeklyreview
Myalgic encephalomyelitis or chronic fatigue syndrome (ME/CFS)
Read about myalgic encephalomyelitis (chronic fatigue syndrome or ME/CFS). It’s a long-term condition with a wide range of symptoms including extreme tiredness. (NHS website, nhs.uk)
Rejuvenation for Pro Senectute through NFT and Metaverse?
Pro Senectute beider Basel, a foundation supporting the elderly in and around Basel, launched its NFT project last week, having already announced its metaverse commitment beforehand. According to a media release, Michael Harr, managing director of the 15-million Basel-based organization, wants to use the purchase of these “properties” in a “central location” in two online worlds to promote solidarity between the generations, enable older people to use current and future digital technologies, promote their integration, and reduce social isolation.
This article is also available in German 🇩🇪. Or read more blockchain-related stuff.
We wanted to take a look at what “central location” actually means in an online world called “Decentraland”. Decentraland measures 5 km by 5 km, and the Pro Senectute presence can be found 500 m from the edge of the city, i.e. on the outer 10% of the buildable space. The view from the roof of the anarchy café right next door also screams anything but “central”: a deserted environment as far as the eye can see, and on the other side of the street a seemingly endless steppe. Neither walking around nor several return visits changed that.
But Michael Harr’s vision goes far beyond today: “Where will we reach older people in the distant future, and how do we get to interact with them?”
A behind-the-scenes look at metaverses and NFTs helps us understand whether this goal can be achieved.
The view from the roof of the neighboring Anarchy Café of the Pro Senectute double parcel in Decentraland (bottom left, black), the main road (center), and the steppe landscape (right)
Table of Contents
- But what is the Metaverse?
- What is an NFT?
- Goals reached?
- NFT goals
- Metaverse goals
  - A. Participation in progress?
  - B. Prediction for 40 years?
  - C. Central location?
  - D. Reduce isolation?
  - E. Experiences despite limitations?
  - F. Digital course/consultation center?
- Conclusion
- Blockchain ecosystem
But what is the Metaverse?
According to Wikipedia, a metaverse is a digital space created by the interaction of virtual, augmented, and physical reality. Neal Stephenson coined the term 30 years ago for an online world that allows its users to escape, at least temporarily, the dystopia of “Snow Crash.” The term became known to a wider audience when Facebook renamed itself Meta.
More than 200 such virtual 3D experience spaces are offered by various companies today. Analogous to social media platforms, these metaverses also try to differentiate themselves from each other through incompatibility and lock-in mechanisms: “customer loyalty” by means of walled gardens.
Accordingly, customers also have to make multiple commitments. Pro Senectute beider Basel has acquired “plots of land in two of these metaverses, Decentraland and The Sandbox, with the vision of one day building a virtual course center or a virtual counseling center with a meeting place there” (Harr). Currently, metaverse architects are still being sought for this.
Let’s take a closer look at these two metaverses.
Metaverse 1: Decentraland
View from the street to the construction site with a stylish “Under Construction” sign (left of the lettering). Neighborhood, from left to right: Anarchist Café, Menstruation Pyramid, Canadian Law Firm.
For new Decentraland users, getting started is a snap if you speak English: visit play.decentraland.org in your browser and select that you want to play as a guest (and, depending on your computer, dismiss the message that your graphics performance is insufficient). Some loading time later, you are greeted by an introduction that explains how to roam the virtual world with the mouse-and-keyboard combination known from first-person shooters. With a jump into the cold water (literally), you arrive in the city center with many buildings, but not a single other player.
Inexperienced in first-person shooters, slightly disoriented, and annoyed by the occasionally suddenly materializing walls, I try to locate the “central location” (coordinates -38,120 and -38,121) on the map.
Decentraland map at maximum zoom: still not enough to see the Pro Senectute “plot” (it is far further “north”).
Despite the maximum zoom level, the map only shows a good third of the way to the Pro Senectute plot. In the chat, however, you can enter “/goto -38,120”, as one of the omnipresent in-game advertising posters will explain to me later. (The owners of some Decentraland parcel want to lure me away from the “central” Pro Senectute vicinity to their actual location, probably also somewhere else “central”.)
The double lot currently consists only of a black wall with floating(!) lettering “Pro Senectute beider Basel” in front and a shy “Under Construction” sign next to it, evoking memories of 90s websites.
The environment of the online presence (or better, absence?) of Pro Senectute is still relatively empty. Why the choice fell on this “site” is not clear to me. Neither the neighboring Canadian law firm nor the advertising for a menstruation app behind it really seems tailored to Swiss seniors.
The view from 50 m “southeast” of the parcels.
But maybe some of the male senior citizens will admire the oversized pin-up girl of the NFT discount store a few steps to the southeast of the plot. At least one can spend some of the waiting time there until the course center opens.
Metaverse 2: The Sandbox
Sandbox map showing the parcel and neighborhood.
For The Sandbox, you are forced to create an account, which we managed only after several attempts. It then turns out that a separate application is needed, which is not compatible with my computer.
A colleague has a compatible computer and can finally start playing after several gigabytes of download. However, the operation of The Sandbox is not intuitive enough for him to navigate to the “premium” coordinates of Pro Senectute.
If two computer scientists are already reaching their limits, what does that mean for senior citizens? Presumably, they first have to go to the physical course center in downtown Basel, where they are then told how to get to the virtual course center.
Early metaverses
Second Life greets you in a friendlier way, both in language (German) and in graphics.
Since the 1980s, online games have been used for social interaction, back then still text-based. Online worlds then gained a somewhat greater degree of popularity starting in 2003 with the 3D world of Second Life and with online role-playing games like World of Warcraft. Today’s metaverses hope to escape this niche existence.
Both Second Life and World of Warcraft, though much older, feature more realistic (and in my opinion, friendlier) graphics than the Pro Senectute selection.
Interaction metaverse
The floor plan of a community WorkAdventure (left, with a glimpse of the expansive garden outside the office) with our avatars. Meeting in the lounge automatically and intuitively starts a video conference among those present.
Spring 2020 marked the beginning of working from home for many. With WorkAdventure, a few French developers created a platform that allowed them to recover some of the social aspects of their usual office in their home office. The result was an open platform where anyone could create their own 2½D scenes. Most implementations recreated office worlds with meeting rooms and lounges. Anyone hanging around in the same lounge would automatically join a joint video conference, a very natural form of interaction. In addition, however, there is also the possibility to connect objects with websites or other WorkAdventures: pinboards can thus display information from the intranet, or doors can lead to other WorkAdventure metaverses, in the open style of the web and its links. All with the goal of enabling informal social exchange in clear yet friendly virtual worlds.
WorkAdventure scenery is not restricted to boring office worlds, but can also be colorful, such as in this world targeted at school children.
What is an NFT?
The Pro Senectute beider Basel online presence will be funded through NFTs: artworks designed specifically for this campaign that can be traded on the blockchain.
“NFT” stands for “Non-Fungible Token”, roughly a “non-exchangeable token”. The “non-exchangeable” serves to distinguish it from “normal”, exchangeable tokens such as digital coins: any digital dollar can be exchanged for any other digital dollar and no one will notice.
These NFTs are being marketed by some as the future of digital art (and much more). For this purpose, a program is stored on the blockchain, a so-called “smart contract”, through which these works of art can be traded. These changes of ownership and the payments made for them are also stored on the blockchain. Interestingly, however, the artwork itself is not on the blockchain; only its URL (web address) is stored in the smart contract.
Thus, an art NFT and its smart contract are essentially:
- Public: A public, normal web link (URL), usable by anyone, pointing to a computer file of a work of art that can be changed at any time.
- Perpetual: The NFT is managed by a public “digital contract” (“smart contract”). These smart contracts and each associated transaction are elaborately (and contrary to ideas of privacy) kept publicly and virtually immutable in a blockchain “forever.” (But that alone is not enough for them to also retain their function).
- Program: This smart contract is a piece of cryptic program code that often is not even properly understood by its developers, let alone its consequences estimated (background).
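The point that the chain holds only a pointer, never the artwork itself, can be made concrete with a toy model. This is pure illustration in Python, not a real ERC-721 contract (real contracts run on-chain, typically in Solidity); all names and URLs are made up:

```python
# Toy model of an ERC-721-style NFT registry (illustration only).
# Note what is NOT stored here: the artwork. Only a URL is kept,
# and whatever that URL serves can change or vanish at any time.

class ToyNftRegistry:
    def __init__(self):
        self._owner = {}      # token_id -> owner address
        self._token_uri = {}  # token_id -> URL of artwork/metadata

    def mint(self, token_id, owner, token_uri):
        if token_id in self._owner:
            raise ValueError("token already minted")
        self._owner[token_id] = owner
        self._token_uri[token_id] = token_uri

    def transfer(self, token_id, sender, recipient):
        # "Code is Law": no court, no goodwill. A typo in `recipient`
        # and the token is simply gone for good.
        if self._owner.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self._owner[token_id] = recipient

    def token_uri(self, token_id):
        # Anyone can read this URL and download the artwork; owning
        # the NFT poses no technical hurdle to anyone else.
        return self._token_uri[token_id]


registry = ToyNftRegistry()
registry.mint(1, "0xAlice", "https://example.org/art/1.png")
registry.transfer(1, "0xAlice", "0xBob")
print(registry.token_uri(1))  # prints the URL, which is all the chain "contains"
```

The ownership record changes hands; the file behind the URL does not move, gains no protection, and is downloadable by anyone who reads the public registry.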
The NFT (and the underlying smart contract), on the other hand, do not provide (sometimes or always):
- Right of use: The usage rights you gain from this NFT or its contract are rarely precisely defined.
- Hurdle: The contract, however, does not pose a technical hurdle to viewing or even downloading the artwork. On the contrary, it points every blockchain user to where the artwork can be downloaded.
- Trustworthiness: Some of these smart contracts “simply” charge a fee when they are resold (sometimes as a surprise); others are actively malicious (e.g., they steal crypto money or NFTs); quite a few are buggy and can be abused by hackers. An ordinary user does not have the tools to determine this before bindingly interacting with the contract.
- Bug fixes: Many smart contracts do not offer the possibility of a software update, which could correct errors or security vulnerabilities. This is because this update option can also be (and has already been) abused.
- Goodwill: There is no remedy for faulty or malicious smart contracts, not even for simple typing errors during a transfer. Recourse is impossible, as is an appeal to a court. Hoping for generosity on the part of the other party is also impossible by design: the smart contract program, once started, acts completely stubbornly and autonomously and cannot be influenced by anything or anyone (“Code is Law”: the program code is the only valid law, of first and last instance).
- Contract: This “digital contract” probably does not even correspond to what the Code of Obligations understands by a contract, since consent to such an incomprehensible construct cannot be seen as an expression of will. In the event of deceit (which is easily possible with smart contracts), there is also no possibility of subsequent correction, for example by annulling or amending the contract.
- Uniqueness: The alleged uniqueness of NFTs is an illusion. Likewise, without additional, sometimes time-consuming research, it is not possible to verify whether the creator of the NFT is also the owner of the copyright or right of use to the work of art. (In other words, the fact that something is sold as an NFT says nothing about its authenticity or whether it is copied/faked).
- Free of charge: Transactions around smart contracts (even just moving them between the owner’s left and right pocket) cost money, or more precisely, “cryptocurrencies”. These are not currencies in the classic sense, as they lack both a functioning monetary cycle and the necessary stability.
- Value: Quantifying the value of art reliably and objectively is already an impossibility. It becomes even more difficult when the yardstick for this, the cryptocurrency, is itself highly speculative and can only be kept alive, as it were, by a snowball system.
- Simplicity and democracy: This blockchain was created in 2008 as an alternative to an economic system dominated by a few people and characterized by greed, inefficiency, intransparency and excessive complexity. However, the economic system created by blockchains is riddled with concentrated greed, inefficiency, intransparency, and unnecessary complexity. And, despite the regularly repeated promises of fairness and democratization through the blockchain, the financial power behind it is in the hands of very few people with sometimes very strange ideas about the society we should all live in.
In short, a very complex, opaque and unclear system is being set up without any possibility for appeal or mercy, which tramples on our rights to privacy and – despite all protestations of fairness and distribution – actually only benefits a few (newly) rich.
The use of NFTs does not fundamentally change the problems of funding the arts or nonprofit organizations: Neither the often unfair access to audiences, the often unfair pay, nor the often difficult access to upfront investment. It also isn’t a step into an entirely new arts culture or arts funding culture. (Or, in the case of Pro Senectute, donation culture).
On the contrary, in my opinion, it is not only without benefit, but even harmful to society in the long term.
Goals reached?
Now, do NFTs and the selected metaverses fulfill the goals of Pro Senectute beider Basel? Let’s first look at the goals around the NFTs:
NFT goals
A. Unique property?
In its media release, Pro Senectute beider Basel writes: “An NFT maps a digital certificate of ownership and thus represents a unique, non-copyable original on the blockchain.”
Let’s take this one step at a time:
Certificate of ownership: What rights one acquires to the work through the action is not clear from either the media release or the NFT website. My request for clarification was not answered.
Unique: There are up to 2512 copies of the individual works. Can one really speak of “unique”‽
Not copyable: The NFT stored on the public blockchain only points to a “normal” (also public) URL. A download (and thus the creation of a local copy) is therefore possible from any computer without restrictions.
Original: In the digital world, there is no difference between original and copy.
On the blockchain: The works themselves are not on the blockchain, only references and proofs of purchase.
In short: the sentence from the media release bears—generously speaking—barely any relation to reality.
B. Asset value?
Also mentioned in the media release are “unique virtual assets”. Again, this raises questions about the valuation of the NFT within a volatile or illiquid secondary market and thus whether the NFT really represents an asset.
I would not want the three most common NFTs even if gifted, even less so for money. (Nevertheless, I bought one of the NFTs — for analysis purposes).
C. Paper Wallet
The NFTs are sent as a “paper wallet”: a QR code printed on high-quality paper for viewing the work, and a scratch area which initially hides the information required for resale.
Such a paper wallet has the advantage that the buyers do not have to take care of the cryptocurrencies and crypto wallets (= an online cryptocurrency account). However, it also has the disadvantage that one only finds out after a fee-based “resale” to oneself whether one has really acquired a right to this NFT or whether the same envelope with the same access data was simply sent out 4444 times.
Also unclear is how many hands the secrets under the scratch field passed through before the scratch layer was applied (account creation, transmission, printing) and how many people could thus also lay a digital claim to the artwork. (The blockchain does not say who the rightful owner is now; any person who knows the secrets under the scratch-off can resell it on their behalf.)
In response to my question, Michael Harr explained that he was prepared: A print shop had been chosen that also handles sensitive data (e.g., lottery tickets) and could guarantee confidentiality. In addition, further security measures were taken. The paper wallets also carry some credit, so that no additional costs would be incurred for the first transfer of ownership, which costs around one centime.
However, the question about the rights acquired with the NFT and the characteristics of the smart contract remained unanswered and thus unclear.
D. New channels?
Does this action really bring in additional revenue from “new channels”? Since the beginning of the year, the NFT market has slumped by 97%. Despite this, “the land costs have practically already been recouped” through the NFT sales, according to Harr.
Pro Senectute beider Basel will inform about the outcome after the end of the activity, which will last at least until spring. Only then will we have a chance to determine whether this has led to additional revenue or whether those willing to donate have simply changed their method of giving.
But even that conclusion will hardly show whether these new channels are sustainable or just a flash in the pan.
Metaverse goals
A. Participation in progress?
The media release motivates the creation of these “Metaverse branches” with “let[ting] the older generation actively participate [in] technological advances.”
Is this the sight of the future? (Splash screen for The Sandbox)
Both platforms try to motivate players to spend cryptocurrency on all sorts of in-game extras. If you are not careful, you can quickly lose a lot of money.
In particular, The Sandbox seems to be a Minecraft clone trimmed for monetization, coming across on the splash page with the combined charm of a back-alley gambling joint and a Formula 1 overall.
Monetization: You are not worthy to be saved unless you finish the level.
One has to question whether these two rather bleak, English-only worlds with their obtrusive monetization, omnipresent advertising, and sometimes rather unintuitive interfaces are really (1) the ideal environment to lure seniors into and (2) whether one should lend these platforms, as well as the underlying cryptocurrencies, legitimacy through one’s presence.
Decentraland is full of advertisements. Including advertisements for advertisements. And advertisements for NFTs. And then even more advertisements for NFTs.
The ability to critically scrutinize tends to decline with age (think of the inglorious grandparent scams). Exposing possibly cash-strapped seniors to such heavily monetized platforms seems dangerous, to say the least, especially if other participants on these platforms then try to lure these seniors with promises of extravagant profits (through gambling or NFT “investments”).
B. Prediction for 40 years?
In the shockingly uncritical SRF News article, Michael Harr describes the target audience as the platforms that “people who are 30 or 40 years old today will use when they themselves are of retirement age”.
I think it is extremely daring to try and predict today which technologies will still be relevant in 30 to 40 years.
Looking back, 40 years ago, the end of the Iron Curtain was unimaginable and the Internet in its current form had not yet been born; 30 years ago, the WWW was just one year old and known only to some insiders. Today, hardly anyone can imagine that smartphones with touchscreens have only been around for 15 years, much shorter than the forecast period.
It is impossible today to make predictions about whether a specific platform of a given technology selected now will even exist then, especially when even large platforms face an uncertain future.
C. Central location?
“They are central locations in the metaverse, comparable to downtown locations in the real world,” says Michael Harr, executive director of Pro Senectute beider Basel.
In the metaverse, you can be teleported to any possible coordinate at any time within a few seconds. At first glance, this seems to imply that the concept of “central location” we know from the real world completely loses its meaning. Nevertheless, there are significant price differences between properties at different locations. Researchers at the University of Basel, for example, have analyzed the prices of the original auction through which many Decentraland “properties” found their first owners five years ago.
They published their conclusions last year, long before the current Pro Senectute purchases: Buyers were willing to pay significantly higher prices for a “good location” in 2017.
Fabian Schär, one of the authors of the study, explains: “The metaverse is an attention economy. Similar to the physical world, those parcels that have a high density of visitors, i.e. are located near large squares, main roads or important districts, are particularly sought-after. In addition, parcels with memorable coordinates also fetch significantly higher prices. This aspect is reminiscent of markets for domain names where similar effects can be observed.”
Regardless of location, all land purchases in the Metaverse are high-risk investments, he adds. “As interesting as the subject matter may be, potential buyers of virtual land parcels must always be aware that they can lose everything.”
According to a recent study by Wemeta, the value of The Sandbox and Decentraland properties has plummeted by 80% and demand has also declined, in Decentraland by a factor of 6.
How “central” is the location really? The Pro Senectute properties in Decentraland are only 10% away from the “northern” edge.
The coordinate -38,120 doesn’t seem particularly memorable either, so that can’t be it either.
The Canadian lawyer’s office or the menstruation app “west” of the Pro Senectute location are also unlikely to attract many walk-in customers. Nor will the aforementioned abandoned Anarchy NFT Café with its viewing terrace or the NFT Discount with its pin-up girl storefront. The use of the buildings in the direction of “north” is unclear and in the direction of “east” is undeveloped steppe as far as the eye can see.
“Central?” Apparently not. Further research shows: The steppe to the east is actually a huge university campus. The map shows no points of interest, so I take a stroll. Some zig-zagging later, I discover an (empty) building with a periodic table and a lectern. But even that will hardly attract streams of visitors.
D. Reduce isolation?
In the press release, Michael Harr, the foundation’s manager, is quoted as saying: “We want to promote the integration of the older generation and reduce social isolation.”
The seniors I interviewed expressed their belief that such worlds would promote isolation instead. Some had already enjoyed using an online Jass app remotely. Together with a parallel audio or video call, this had almost recreated the pleasurable feeling of sitting at a table together.
For seniors who are no longer confident enough to operate a tablet, there are also approaches for senior-friendly video telephony solutions.
E. Experiences despite limitations?
“Thanks to VR glasses, people with disabilities can experience things that are not possible in analog. It is conceivable, for example, that an elderly person in a wheelchair can experience the view from a mountaintop thanks to VR glasses,” explains Michael Harr in a response to my inquiry.
A very laudable goal, but do we need a Metaverse for this? Back in 1978 (!), a 3D virtual tour of Aspen was created. Without having to rise from the armchair (or wheelchair), it was possible to explore the entire city. Completely without a Metaverse. More than 25 years ago, the necessary technology became available on widely used computers.
The view from the top of the mountain (or even the way there) could be experienced without a metaverse, without anything but a web browser. Not every 3D representation has to be locked into a commercial platform. “Locking in” also aptly describes the minute amounts of data that can be stored for a Decentraland parcel; far too little for a realistic mountain panorama. (This limit also explains why the five year old world still shows only minimalistic graphics).
F. Digital course/consultation center?
The Decentraland property of Pro Senectute is 16×32 m². The (small-looking) auditorium mentioned above is twice the size; some form of classroom could certainly be accommodated in half of it. The question is rather, how would courses be held there? So far, there seem to be no mechanisms in Decentraland that support knowledge transfer, even ignoring the already mentioned very restrictive storage limits). It is also hard to imagine that better knowledge transfer than with specialized tools (or even just a video conference) will ever be possible in these metaverses.
Conclusion
The fundamental question is whether a world of hypercapitalism is the right environment to attract seniors to: With its more or less subtle, ubiquitous monetization of almost every corner, its aspects of gambling, and its attempts at sensory overload, especially through omnipresent advertising, it is not a platform I would feel comfortable recommending to a wide range of seniors. Instead, if I had the say at an organization catering to a wide spectrum of elderly people, I would advocate a reduced, more comprehensible, intuitive and user-friendly environment, catering to the reduced receptiveness and critical faculties of age.
Seniors I interviewed would rather enjoy the sun than invest so much time in a confusing system. For digital communication, they would prefer the telephone or a video call. Incidentally, such a call is also an excellent basis for starting a joint online Jass.
None of these statements should be taken as an endorsement of keeping interested seniors from exploring online worlds; on the contrary. My respondents, if they had to visit one of the Metaverses, would prefer the more intuitive, natural environment of Second Life over Decentraland any day.
Many of the other statements of Michael Harr were — as explained above — hard for me to understand. Therefore I asked the Pro Senectute beider Basel for further information. I hoped that they could give me information about concepts or background considerations, the total costs and their financing. If the initial financing was “not from donations”, then where was it from? (Hopefully also not from the operation of the retirement homes or the earmarked public funds, the three together making up about 90% of the foundation’s budget). Likewise, I asked for information about what the features (and costs) of the smart contract were, as well as how the security of the production process of the paper wallet would be ensured. The response 1½ weeks later rarely addressed my specific questions. A commendable exception is the explanation of the paper wallets and that the wallets have been topped up with sufficient cryptocurrency so the buyer can transfer ownership once without going through further hoops or expenses.
As with many other projects in the blockchain environment, the question of who will benefit from the project remains unanswered: Will only the early entrants and the consultants earn anything, or will it add sustainable value to our society?
Part of this ambiguity probably arises from the lack of clear distinction in communication between what is being worked on today and where Pro Senectute beider Basel sees itself in 30 or 40 years.
Many ambiguities could have been clarified by a statement such as, “With a portion of the maximum 300,000 francs from the NFT proceeds, we want to explore how we might best interact with seniors in a few decades.” At present, neither such a statement nor, apparently, a sufficiently deep understanding of what the various envisioned technologies can and definitely cannot do seems to exist.
If hyped technology and its effects are only superficially understood, there is a great risk of incorrect assessments and wrong decisions. It remains to be hoped that, despite the signs to the contrary, the added value for society will prevail in this case. And perhaps there will be some money left over for the current generation.
Blockchain ecosystem
More posts in the blockchain ecosystem here, with the latest here:
The year in review (2023-12-23)
This is the time to catch up on what you missed during the year. For some, it is meeting the family. For others, doing snowsports. For even others, it is cuddling up and reading. This is an article for the latter.
NFTs are unethical (2023-07-18)
As an avid reader, you know my arguments that neither NFT nor smart contracts live up to their promises, and that the blockchain underneath is also more fragile and has a worse cost-benefit ratio than most believe. Similarly, I also claim the same for the metaverses built on top of them all. And that the… Read more: NFTs are unethical
Inefficiency is bliss (sometimes) (2023-07-15)
Bureaucracy and inefficiency are frowned upon, often rightly so. But they also have their good sides: Properly applied, they ensure reliability and legal certainty. Blockchain disciples want to “improve” bureaucracy-ridden processes, but achieve the opposite. Two examples:
The FTX crypto exchange and its spider web (2022-12-14)
Yesterday, the U.S. Securities and Exchange Commission (SEC) released its indictment against Sam Bankman-Fried. It details the financial entanglements of FTX, Alameda Research and more than a hundred other companies and individuals. We have tried to disentangle these allegations somewhat for you.
Web3 for data preservation? (Or is it just another expensive P2P?) (2022-11-19)
Drew Austin raises an important question in Wired: How should we deal with our accumulated personal data? How can we get from randomly hoarding to selection and preservation? And why does his proposed solution of Web3 not work out? A few analytical thoughts.
Rejuvenation for Pro Senectute through NFT and Metaverse? (2022-10-24)
Pro Senectute beider Basel, a foundation to help the elderly around Basel, launched its NFT project last week and already informed about its Metaverse commitment beforehand. According to a media release, Michael Harr, managing director of the 15-million Basel-based company, wants to use the purchase of these “properties” in a “central location” in two online… Read more: Rejuvenation for Pro Senectute through NFT and Metaverse?
Bitcoin: The main thing is it’s something with crypto
So far, those who got in early have profited from Bitcoin and blockchain. Now even companies that actually make cigars or iced tea want a piece of the pie. Nils Heck (ZEIT ONLINE)
Bitcoin Block Timing Statistics
Bitcoin wants to be a universal payment means, providing rapid transactions. Here is an analysis on the blockchain timing, based on their timestamps.
More on blockchain in this overview and in the simple, humorous, yet thorough article «Hitchhiker’s Guide to the Blockchain».
Table of Contents
- Data sets
- Negative durations (data set 0)
- Block mining interval distribution (data set 3)
- Block confirmation delays
Data sets
Data set 0: Everything
This data set includes all 725 287 blocks created up to 2022-02-28 14:13 (all times UTC), the time at which the data was extracted.
Data set 1: All but genesis
Every start has its own teething problems. For example, the Bitcoin “genesis block” (the first block, numbered 0, the one without ancestors) was created on 2009-01-03 18:15:05, according to the timestamp embedded in it. The next block, number 1, was only created 5½ days later, on 2009-01-09 02:54:25. The following 13 intervals range from around 1½ to 13½ minutes, in the expected range. Therefore, the first interval of 5½ days was removed from the data set.
However, the 15th interval, of just over a day (24:12:37), is included, as there are 3 more such events in 2009, dated in May and June.
Data set 2: 2010+
The year 2009 includes 9 intervals of 10 hours or more, which none of the other years have. Also, the entire blockchain history lists 152 intervals of more than two hours, 137 of which are in that “genesis year”. (These 2+ hour intervals continue into 2021, with two events listed there.) Therefore, a data set 2 is created, which includes all but the genesis year, 2009.
Data set 3: 2020+
A last data set only includes the last roughly 26 months, ranging from 2020 to today (2022-02-28). This is meant to represent the current state, but may be based on too narrow a sample.
Negative durations (data set 0)
2 % of the blocks show a negative interval length. This is possible, as the timestamps inside the blocks are created by the winning miners themselves and are in no way externally validated. Possible reasons for these negative interval lengths include unsynchronized clocks, not updating the time field while updating the hash, or someone purposefully manipulating the timestamp (although I do not see an obvious benefit in doing so).
The following table lists the number of negative intervals in a given year. Most of those intervals are one-off, i.e., while the timestamp at block i+1 may show a time before that of block i, the one at i+2 is again in the right order (i.e., after both i and i+1). Events where two consecutive blocks stay behind are labelled “double negative”; runs of three or four consecutive blocks whose timestamps all lie before the first block’s are labelled “triples” and “quadruples”. Especially for the latter categories, it is more likely that the first block’s timestamp was wrongly set into the future.
Year | Negative | Double | Triple | Quadruple |
---|---|---|---|---|
2009 | 524 | 41 | 23 | 14 |
2010 | 1050 | 106 | 46 | 36 |
2011 | 629 | 132 | 61 | 36 |
2012 | 2730 | 711 | 258 | 128 |
2013 | 1516 | 264 | 86 | 49 |
2014 | 3936 | 637 | 198 | 72 |
2015 | 2442 | 509 | 118 | 24 |
2016 | 482 | 83 | 16 | 1 |
2017 | 338 | 16 | 5 | 1 |
2018 | 223 | 6 | 1 | |
2019 | 230 | 6 | 2 | |
2020 | 171 | 7 | 3 | 2 |
2021 | 122 | 5 | 1 | |
(2022) | 15 | 1 | 1 | |
Annual number of blocks whose time puts them before their parent or ancestors.
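The run-length classification above can be sketched in a few lines of Python. This is a toy illustration with made-up timestamps, not the real block data:

```python
def classify_backward_runs(timestamps):
    """Count blocks whose timestamp lies before their parent's, grouped by
    how many consecutive blocks stay behind the first out-of-order block
    (run length 1 = single negative, 2 = double negative, etc.)."""
    counts = {}  # run length -> number of occurrences
    i = 1
    while i < len(timestamps):
        if timestamps[i] < timestamps[i - 1]:
            # The previous block's timestamp is possibly set into the future;
            # count how many successors stay behind it.
            reference = timestamps[i - 1]
            run = 0
            while i < len(timestamps) and timestamps[i] < reference:
                run += 1
                i += 1
            counts[run] = counts.get(run, 0) + 1
        else:
            i += 1
    return counts

# Toy example: block 2 steps back once; blocks 5-6 form a double negative.
ts = [100, 700, 650, 1300, 1900, 1800, 1850, 2500]
print(classify_backward_runs(ts))  # {1: 1, 2: 1}
```

Run over the full list of 725 287 block timestamps, this yields exactly the per-year tallies in the table (after grouping by year).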
We can clearly see that the number of backward timesteps has decreased greatly over the past four years.
The following table lists the median and maximum size of these backward steps.
Year | Median | Maximum |
---|---|---|
2009 | 01:53 | 01:04:56 |
2010 | 00:42 | 01:58:35 |
2011 | 01:10 | 01:58:45 |
2012 | 02:14 | 01:30:33 |
2013 | 01:20 | 01:43:21 |
2014 | 02:17 | 01:32:48 |
2015 | 02:10 | 01:04:20 |
2016 | 01:08 | 11:16 |
2017 | 00:18 | 09:05 |
2018 | 00:09 | 12:28 |
2019 | 00:09 | 08:15 |
2020 | 00:44 | 31:51 |
2021 | 00:56 | 06:46 |
(2022) | 00:11 | 17:55 |
Annual median and maximum reverse time intervals (seconds accuracy) in the Bitcoin blockchain.
Up to 2015, maximum clock offsets of one to two hours were possible, with half of the backsteps in the one to two minute range. Since 2017, the median backward step is less than a minute, in many years only on the order of ten seconds. Maximum errors since 2017 never exceeded 32 minutes, often less than ten minutes.
Given that this is expensive high-tech equipment dealing with large amounts of money, touted as the basis of the future of finance, document timestamping, and much more, it is baffling that time is not better synchronized: time synchronization with sub-second accuracy is 1970s technology, standard on most computers today.
Timestamp accuracy estimation
The section above merely shows potential sources of error, as all analyses rely on the timestamps recorded in the blocks themselves, with no ground truth available. Assuming that the sizes of the backward steps are a good first approximation of the clock errors, the maximum error for blocks minted before 2016 should not exceed two hours; for blocks minted afterwards, it should not exceed 30–40 minutes.
Over the entire lifetime of the Bitcoin blockchain, about 2 % of the intervals have a negative duration.
However, in recent times (data set 3: 2020+), only 308 out of 114 596 intervals have negative duration, less than 0.27 %, or, on average, 1 out of every 373 blocks. More than 90 % of those reverse steps in data set 3 are shorter than a minute; only 16 out of the 114 596 intervals (0.014 %) go back more than one minute. For data set 3, we can therefore safely assume that an overwhelming majority of intervals is accurate to a minute or less; also, for the other data sets, such an assumption likely holds true.
Block mining interval distribution (data set 3)
On the right, you see the probability distribution of the interval length over data set 0. As the distribution has some strong tails, the outermost 0.01 %, the shaded areas, have been horizontally enlarged by a factor of 1000. (Click on the image to see it in full resolution.)
The center 99.98 % of the distribution cover time differences between -58 minutes and +3 hours and 11 minutes (all data under this heading are rounded to minutes, no seconds). However, there is a 1:5000 chance (on average, about every 20 days), that an interval is outside of those boundaries, extending to between -1:59 and 25:08, or, roughly, -2 hours to +25 hours.
Other key data points: ±0 is at 2 %, +2 minutes at 19 %, +10 minutes (the Bitcoin target) at 65 %, +20 at 88 %, +30 at 96 %, +40 at 99 %, +1 hour at 99.73 % (i.e., on average, roughly every 2–3 days you have to wait more than an hour for the next block).
Data set 2 (2010+) still looks essentially the same at the lower end, but more benign at the higher end. The maximum is 6:51, just below 7 hours, the 99.99th percentile at 1:37. But again, the one hour mark is at 99.79 %, still about one interval of more than an hour every three days.
Data set 3 (2020+) shows further calming on both ends: Extremes are now -32 minutes and +2:19 hours, with the inner 99.98 % spanning -1½ to +1:45. The shaded areas only cover 11 intervals each, as the total number of intervals during these 26 months is only 114 595.
The average interval duration is about 9.8 minutes, very close to the 10 minute target. The median is at 6.9 minutes (actually, 6 minutes and 52 seconds).
±0 is at 0.27 %, +2 at 18 %, +10 at 64 %, +20 at 87 %, +30 at 95 %, +40 at 98 %; i.e., with the exception of ±0, they all moved left just slightly.
The mining process is (approximately) a Poisson process with a mean interarrival time of 10 minutes. The interarrival times of a Poisson process are exponentially distributed, and the exponential CDF evaluated at the mean is 1−1/e, around 63 %, which closely matches our observation.
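Under this model (an idealizing assumption: a homogeneous Poisson process with a fixed 10-minute mean, which real mining only approximates, since hash rate and difficulty vary), the percentages above can be reproduced from the exponential CDF:

```python
import math

RATE = 1 / 10  # expected blocks per minute (10-minute target)

def interval_cdf(minutes, rate=RATE):
    """P(next block arrives within `minutes`), assuming exponentially
    distributed interarrival times with the given rate."""
    return 1 - math.exp(-rate * minutes)

for m in (2, 10, 20, 30, 40, 60):
    print(f"+{m} min: {interval_cdf(m):.1%}")
# At the 10-minute mean, the CDF is 1 - 1/e, about 63.2 %.
```

The computed values (18 %, 63 %, 86 %, 95 %, 98 %, 99.8 %) line up closely with the empirical percentages measured above, which supports the Poisson approximation.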
Block confirmation delays
The rule of thumb to accept a block as “confirmed” for “normal” purposes is to wait until it reaches a depth of 6. (For important transactions or if there might be a powerful adversary involved, higher numbers such as 60 are recommended.)
If you submit a transaction now, you have to wait for this transaction to be incorporated into a block (which might or might not be the next block being added) and then wait for 5 more blocks to be mined.
As an approximation, we assume that it takes 6 intervals between submitting your transaction and it being considered “confirmed” for most purposes.
Therefore, the period of 6 intervals is also interesting. Its distribution (based on data set 3, i.e., 2020+) is shown on the right.
The minimum 6-interval duration seen in the past 26 months is 2 minutes, the maximum 6:02 hours. The 99.98 % center ranges from 0:07 to 3:44, with 2 % corresponding to 20 minutes, 22 % to 40 minutes; and 57 %, 81 %, 93 %, and 97.7 % to 60, 80, 100, and 120 minutes, respectively.
So, on average, for about 6 of the 240 daily intervals, you have to wait longer than 2 hours before your block is confirmed. Given that this 26-month data set contains 112 intervals longer than 3 hours, on average, about once per week your confirmation will take longer than 3 hours.
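Under the same idealized Poisson assumption, the wait for 6 blocks follows an Erlang(6) distribution (the sum of 6 independent exponential interarrival times), whose tail can be computed in closed form. Note that the real tail is somewhat heavier than this model predicts, since the hash rate is not constant:

```python
import math

RATE = 1 / 10  # blocks per minute

def erlang_survival(minutes, k=6, rate=RATE):
    """P(waiting longer than `minutes` for k blocks), i.e. the survival
    function of an Erlang(k, rate) distribution."""
    x = rate * minutes
    return math.exp(-x) * sum(x**i / math.factorial(i) for i in range(k))

for m in (60, 120, 180):
    print(f"> {m} min: {erlang_survival(m):.2%}")
```

This gives about 45 % for waits beyond 1 hour and about 2 % beyond 2 hours, close to the empirical 43 % and 2.3 %; beyond 3 hours the model predicts roughly 0.03 %, noticeably less than the observed 112 of 114 596 intervals, illustrating the heavier real-world tail.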
In summary, neither the block generation intervals, the likelihood of your transaction being included in a particular block, nor the duration of the confirmation interval are reasonably predictable. In my opinion, this alone puts serious question marks behind the claim that Bitcoin should become a replacement for traditional money.
#Timestamps #Blockchain #Bitcoin
Transparent, Trustworthy Time with NTP and NTS
«Time is Money», as the old adage says. Who controls the time controls all kinds of operations and businesses around the world. And therefore, controls the world. Today, we all take accurate time for granted, even though it is mostly delivered over the Internet unsecured. But this is easy to change.
Switching from NTP to NTS is as easy and important as moving from HTTP to HTTPS.
(Part 1 in the NTS series.)
Table of Contents
- What could possibly go wrong?
- Reliable time
- Installing NTS capable software
- Configuring Chrony or NTPsec
- Raspberry Pi caveats
- Public NTS-capable servers
- More servers
- History
What could possibly go wrong?
In the old times™, accuracy on the order of decades (“during the reign of King Herod”) or years (“a boy of seven summers”) might have been good enough. Today, hours and minutes are the minimum for many processes, and in some cases, even milliseconds are too coarse.
Anything from industrial control systems over (distributed) file systems to everything even remotely related to security depends on time and will break one way or another if time is wrong: HTTPS/TLS/X.509 certificates, PGP keys, DNSSEC, two-factor authentication, overwriting or expiring backups, and even vaccination certificates, to name just a few.
Reliable time
Different applications have different requirements on time: For some, such as rate measurement devices, a coherent pace may be important. For others, including certificate validity checks, the actual, absolute time is key; they can live with uneven speeds of time advancing.
There may have been a time when exploring each application’s detailed reliability requirements was necessary. However, today, for the vast majority of everyday applications, the answer is easy: Just use authenticated NTP, aka NTS!
Installing NTS capable software
First, you need an NTS capable software. It appears that NTPsec and Chrony are currently the only choices. In this example, we use Chrony, but feel free to use NTPsec instead.
- GNU/Linux: Almost all distributions include NTPsec or Chrony prepackaged in a way that installing them will replace the preinstalled time synchronization service. However, only NTPsec ≥ 1.2.0 and Chrony ≥ 4.0 include NTS support. Debian 11 (bullseye; also buster-backports), Ubuntu 21.04 (hirsute), Arch Linux, and Fedora 34 and newer are among the distributions including sufficiently new versions.
- Other Unix-like systems: Check for packages or build instructions; most BSD systems include a Chrony package.
- MacOS: Install ChronyControl (or install Chrony with Homebrew and disable timed from the launchd configuration).
- Windows: Apparently, there is no NTS software support available there.
Some example setups:
Debian/Ubuntu
apt install chrony # Uninstalls other NTP software
Fedora
Already installed, nothing to do!
FreeBSD
pkg install chrony
echo chronyd_enable=YES >> /etc/rc.conf.local
The configuration file will live in /usr/local/etc/chrony.conf.
MacOS GUI
Download and install ChronyControl, then select Action → Install chrony. The configuration can be edited from the Gears menu in the window.
MacOS Command Line
Install Homebrew, then run the following commands:
brew install chronysudo launchctl disable system/com.apple.timedsudo tee << "EOF" /Library/LaunchDaemons/org.tuxfamily.chrony.plist<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"><plist version="1.0"> <dict> <key>Label</key> <string>org.tuxfamily.chrony</string> <key>Nice</key> <integer>-10</integer> <key>ProgramArguments</key> <array> <string>/usr/local/sbin/chronyd</string> <string>-n</string> </array> <key>RunAtLoad</key> <true/> </dict></plist>EOFsudo launchctl load /Library/LaunchDaemons# Only after you create /etc/chrony.confsudo launchctl start system/org.tuxfamily.chrony
The configuration file will be/etc/chrony.conf
; the service will not start without it.Configuring Chrony or NTPsec
Then, choose 3-4 NTS servers from the list of public NTS servers, below and add them to the configuration file (e.g.,/etc/chrony/chrony.conf
or/etc/ntpsec/ntp.conf
), making sure that they have thents
flag set. For a configuration in Switzerland, the following lines might be added to the configuration file (syntax compatible with both Chrony and NTPsec):
server ntp.3eck.net iburst ntsserver ntp.trifence.ch iburst ntsserver ntp.zeitgitter.net iburst nts
If you use Chrony, you may want to consider adding theauthselectmode prefer
directive on a line of its own to disable non-NTS sources in the presence of NTS servers.After your server chimes reliably, you can also consider removing the unauthenticated (non-NTS) sources (
server
,peer
, andpool
directives).Raspberry Pi caveats
The popular Raspberry Pi computers lack a real-time clock, as do several other single-board or embedded computers, including most home routers. This means that on power-up, these machines do not have the faintest idea what time it is, not even the year or decade. This makes it impossible to verify the validity of TLS certificates, as is required for establishing a trustworthy HTTPS or NTS connection. (On those machines, you might have seen “Invalid certificate” warnings, especially if you only connect it to the network after you have already logged in.)This results in a chicken-and-egg problem: The machine does not know the time, therefore, it cannot get a reliable time, as it relies on NTS verifying the validity period of the certificate.
If you can use Chrony, the easiest and most secure way out is to add the following configuration line:
nocerttimecheck 1
This instructs Chrony to not check the certificate validity time until it has set the clock the first time. Short of adding an RTC chip or GNSS (“GPS”) receiver, this is the most secure option possible. It beats the other option of adding enough non-NTS servers to make (entirely unauthenticated!) initial time setting possible. (If the time is ever maliciously off, it will never revert back to the real time!)Public NTS-capable servers
Currently, the number of public servers with NTS support still seems very modest.Therefore, I maintain a list of public NTS servers. As of 2023-08-12, it has been moved to its own page: Public NTS Server List.
More servers
NTS is an important upgrade to NTP. Hopefully, the number of NTP servers with NTS authentication will grow quickly. This should not be too hard, as all the infrastructure (e.g., Let’s Encrypt), tools, and know-how which has been used to migrate HTTP to HTTPS in the recent years can be reused. Upgrading an existing NTP server to NTS is even simpler as upgrading a web servers, as there are no problems with HTTP redirects and mixed content. (Basically, it is as easy as pointing NTPsec or Chrony to the private key and certificate chains.)If (or when) you run a public server with NTS, please let me know, so I can add it to this list.
History
2021-12-28: Added/improved install instructions for various platforms; added some additional servers and outlined listing policy.2021-12-31: Added
ntp.br
servers.2022-01-08: Listed Linux distributions with NTS-capable software.
2022-01-10: Added
0xt.ca
servers by agreement.2022-01-11: Added
time.signorini.ch
by agreement.2022-02-03: Added
nts1.adopo.net
,[url=https://www.jabber-germany.de]www.jabber-germany.de[/url]
, and[url=https://www.masters-of-cloud.de]www.masters-of-cloud.de[/url]
by agreement.2022-06-04: Added
[url=https://www.ntppool.org/scores/3.220.42.39]virginia.time.system76.com[/url]
,[url=https://www.ntppool.org/scores/3.134.129.152]ohio.time.system76.com[/url]
, and[url=https://www.ntppool.org/scores/52.10.183.132]oregon.time.system76.com[/url]
by agreement.2022-08-02: Added
[url=https://ntpmon.dcs1.biz/gpsd.php]ntpmon.dcs1.biz[/url]
by agreement. Added ntp3.fau.de. Updated NL to production servers. Updated the paragraph below the table.2022-09-04: Removed most 0xt.ca servers as these will be shutting down at the end of this month. time.0xt.ca will remain.
2022-09-26: Added
[url=https://system76.com/time/]{paris,brazil}.time.system76.com[/url]
by agreement.2023-08-12: Moved list out of this post to the Public NTS Time Server List page.
Why time is important in today’s protocols
Assuring consistent state at multiple sites is very hard in the presence of timeouts and errors. Most solutions require complex protocols, multiple round-trip times and/or large amounts of communication or storage. Therefore, instead of trying to achieve perfect synchronization such as two-phase commit, often a “soft state” approach is chosen, where the validity of state (e.g., credentials) expires after some well-defined period. This ensures that inconsistencies will self-heal, putting the burden on reliable time.
#Security #Timestamps #NTP #NTS
Why ninety-day lifetimes for certificates?
We’re sometimes asked why we only offer certificates with ninety-day lifetimes. People who ask this are usually concerned that ninety days is too short and wish we would offer certificates lasting a year or more, like some other CAs do.letsencrypt.org
How to block AI crawlers with robots.txt
If you wanted your web page excluded from being crawled or indexed by search engines and other robots, robots.txt (RFC 9309) was your tool of choice, with some additional stuff like <meta name="robots" value="noindex" /> or <a href="…" rel="nofollow"> sprinkled in.
It is getting more complicated with AI crawlers. Let’s have a look.
Table of Contents
- Traditional functions
- New challenges
- AI crawlers
- Control comparison
- Example robots.txt
- Poisoning [Added 2023-11-04]
- References
Traditional functions
- One of the first goals of robots.txt was to prevent web crawlers from hogging the bandwidth and compute power of a web site, especially if the site contained dynamically generated, possibly infinite, content.
- Another important goal was to prevent pages or their content from being found using search engines. There, the above-mentioned <meta> and <a rel> tags came in handy as well.
- A non-goal was to use it for access control, even though it was frequently misunderstood to be useful for that purpose.
New challenges
Changes over time
The original controls focused on services which would re-inspect the robots.txt file and the web page on a regular basis and update their index accordingly.
Therefore, it did not work well for archive sites: Should they delete old content on policy changes or keep it archived? This is even more true for AI training material, as deleting training material from existing models is very costly.
Commercialization
Even in the pre-AI age, some sites desired a means to prevent commercial services from monetizing their content. However, both the number of such services and their opponents were small.
With the advent of AI scraping, the problem became more prominent, resulting e.g. in changes to EU copyright law, allowing web site owners to specify whether they want their site crawled for text and data mining.
As a result, the Text and Data Mining Reservation Protocol Community Group of the World Wide Web Consortium proposed a protocol to allow fine-grained indication of which content on a web site is free to crawl and which parts require (financial or other) agreements. The proposal includes options for a global policy document or for returning headers (or <meta> tags) with each response.
Also, Google started their competing initiative to augment robots.txt
a few days ago.
None of these new features is implemented yet, neither in web servers or CMSs, nor by crawlers. So we need workarounds.
[Added 2023-08-31] Another upcoming “standard” is ai.txt, modeled after robots.txt. It distinguishes among media types and tries to fulfill the EU TDM directive. In a web search today, I did not find crawler support for it, either.
[Added 2023-10-06] Yet another hopeful “standard” is the “NoAI, NoImageAI” meta-tag proposal. Probably ignored by crawlers as well for now.
AI crawlers
Unfortunately, the robots.txt database stopped receiving updates around 2011. So, here is an attempt at keeping a list of robots related to AI crawling.
Organization | Bot | Notes |
---|---|---|
Common Crawl | CCBot | Used for many purposes. |
OpenAI | GPT | Commonly listed as their crawler. However, I could not find any documentation on their site and no instances in my server logs. |
OpenAI | GPTBot | The crawler used for further refinement. [Added 2023-08-09] |
OpenAI | ChatGPT-User | Used by their plugins. |
Google Bard | Google-Extended | No separate crawl for Bard. But the normal GoogleBot checks for robots.txt rules listing Google-Extended (currently only documented in English). [Added 2023-09-30] |
Meta AI | — | No information for LLaMA. |
Meta | FacebookBot | To “improve language models for our speech recognition technology” (relationship to LLaMA unclear). [Added 2023-09-30] |
Webz.io | OmgiliBot | Used for several purposes, apparently also selling crawled data to LLM companies. [Added 2023-09-30] |
Anthropic | anthropic-ai | Seen active in the wild; behavior (and whether it respects robots.txt) unclear. [Added 2023-12-31, unconfirmed] |
Cohere | cohere-ai | Seen active in the wild; behavior (and whether it respects robots.txt) unclear. [Added 2023-12-31, unconfirmed] |
Please let me know when additional information becomes available.
An unknown entity calling itself “Bit Flip LLC” (no other identifying information found anywhere) is maintaining an interactive list at DarkVisitors.com. Looks good, but leaves a bad taste. Use your own judgment. [Added 2024-04-14]
Control comparison
Search engines and social networks support quite a bit of control over what is indexed and how it is used/presented.
- Prevent crawling by robots.txt, HTML tags, and paywalls.
- Define preview information by HTML tags, Open Graph, Twitter Cards, …
- Define preview presentation using oEmbed.
For AI use, most of this is lacking, and “move fast and break things” is still the motto. Giving users fine-grained control over how their content is used would help with the discussions.
Even though the users in the end might decide they actually do want to have (most or all of) their content indexed for AI and other text processing…
Example robots.txt
[Added 2023-09-30] Here is a possible /robots.txt file for your web site, with comments on when to enable:
# Used for many other (non-commercial) purposes as well
User-agent: CCBot
Disallow: /

# For new training only
User-agent: GPTBot
Disallow: /

# Not for training, only for user requests
User-agent: ChatGPT-User
Disallow: /

# Marker for disabling Bard and Vertex AI
User-agent: Google-Extended
Disallow: /

# Speech synthesis only?
User-agent: FacebookBot
Disallow: /

# Multi-purpose, commercial uses; including LLMs
User-agent: Omgilibot
Disallow: /
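To double-check such a file, a small shell loop can report which crawlers are covered. This is a sketch only: the temporary file below stands in for your real /robots.txt, and the bot list is just the one from the example above.

```shell
# Write the example rules to a temporary file; point "f" at your real
# robots.txt instead when checking a live site.
f=$(mktemp)
cat > "$f" <<'EOF'
User-agent: CCBot
Disallow: /
User-agent: GPTBot
Disallow: /
User-agent: ChatGPT-User
Disallow: /
User-agent: Google-Extended
Disallow: /
User-agent: FacebookBot
Disallow: /
User-agent: Omgilibot
Disallow: /
EOF
# Report, for each crawler we intended to block, whether a rule exists.
for bot in CCBot GPTBot ChatGPT-User Google-Extended FacebookBot Omgilibot; do
  grep -qix "user-agent: $bot" "$f" && echo "$bot: rule present" || echo "$bot: MISSING"
done
```

The case-insensitive whole-line match (`grep -ix`) mirrors how user-agent tokens are compared case-insensitively in practice.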
Poisoning [Added 2023-11-04]
A different, more aggressive approach is to start «poisoning» the AI models; something currently only supported for images. The basic idea is to use adversarial images that will be misclassified when ingested into AI training, thereby disrupting the training data and the resulting model.
It works by subtly changing the image, imperceptibly to a human. The result, however, is that e.g. a cat is misclassified as a dog during training. If enough bad training data is ingested into the model, part or all of the wrongly trained features will be used. In our case, asking the AI image generator to produce a dog may result in a generated dog that looks more like a cat.
The “Nightshade” tool, which has this capability and is supposed to be released soon, is an extension of the current “Glaze” tool, which only causes image-style misclassification.
(AI poisoning example from the Glaze team via Ars Technica.)
Judge for yourself whether this disruptive and potentially damaging approach aligns with your ethical values before using it.
References
- Neil Clarke: Block the Bots that Feed “AI” Models by Scraping Your Web Site, 2023-08-23. [Added 2023-09-30]
- Benj Edwards: University of Chicago researchers seek to “poison” AI art generators with Nightshade, 2023-10-25, Ars Technica. [Added 2023-11-04]
- The Glaze Team: What is Glaze?, 2023. [Added 2023-11-04]
- Didier J. Mary: Bloquer les AI bots (Blocking AI bots), 2023. [Added 2023-12-31]
- Richard Fletcher: How many news websites block AI crawlers?, Reuters Institute, 2024-02-22. [Added 2024-09-27]
AI bots (OpenAI ChatGPT et al) - comment les bloquer - Didier J. MARY (blog)
Bloquer les AI bots (OpenAI ChatGPT et al - màj) - For those who wish to protect the content of their web site or blog from AI bots. Didier J. MARY (blog)
Solana Blockchain: Proof of History analogies
Here are some attempts to compare Proof of History (PoH), as used by the Solana blockchain, with Proof of Work (PoW). Comparisons to real-world activities, such as dice rolling or outer-space rubber-stamp throwing are also included.
More on blockchain in the simple, humorous, yet thorough article «Hitchhiker’s Guide to the Blockchain». I also wrote a shorter Solana/Proof of History explanation in German 🇩🇪.
Please note: This post represents my understanding of how PoH works; I have not been able to get positive/negative feedback about it; I’m still looking forward to confirmations or corrections.
[Added 2022-03-20] Before we look at PoH, let’s first revisit the PoW analogies, parallel dice rolling and rubber-stamp throwing; then look at the differences between them and their PoH equivalents, serialized dice rolling and stamp throwing. Plus, you will find a micro-blockchain analogy as well.
Table of Contents
Proof of Work
Technical description
You may already be aware of the technical description of Proof of Work, as made popular by Bitcoin: A seemingly infinite army of processors tuned to perform SHA-256 calculations repeatedly hashes the header of a candidate block, until a “small enough” value results.
“Small enough” is defined by the current difficulty, which is regularly adjusted (about every two weeks) such that the average time to mine a block remains at 10 minutes.
SHA-256 hashing (like every other cryptographic hash function) is a deterministic operation, so calculating the hash over the same block header multiple times will result in the same value. Therefore, the Bitcoin block header contains a Nonce field, which is changed before every hash operation.
This operation can be fully parallelized, i.e., any number of hash processors can perform their calculations in parallel, without a need to coordinate.
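As a toy illustration of the mining search (everything here is made up: the header string, and a “difficulty” of merely two leading zero hex digits instead of Bitcoin’s many), a shell loop with sha256sum mimics the nonce search:

```shell
# Toy proof-of-work: increment a nonce until SHA-256("header:nonce")
# starts with two zero hex digits. Real Bitcoin difficulty requires
# vastly more leading zeros; this finishes after ~256 tries on average.
header="example-block-header"
nonce=0
hash=$(printf '%s:%d' "$header" "$nonce" | sha256sum | cut -d' ' -f1)
while [ "${hash#00}" = "$hash" ]; do   # loop until the hash begins with "00"
  nonce=$((nonce + 1))
  hash=$(printf '%s:%d' "$header" "$nonce" | sha256sum | cut -d' ' -f1)
done
echo "found nonce=$nonce hash=$hash"
```

Note that each nonce can be tried independently, which is exactly why real mining parallelizes so well.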
Dice rolling analogy
At the current difficulty, about 120 × 10²¹ hash operations are required before a hash result meets the difficulty requirement (120 trilliard in the long scale; 120 sextillion in the short scale). This is equivalent to rolling a die with 120 trilliard sides.
As this is hard to imagine: If we divide the earth’s surface (510 million km²) into 120 × 10²¹ equal pieces, each piece measures only about 0.004 mm². Thus, even if the whole earth served as a die, each of its faces would be a square of roughly 65 µm on a side, thinner than a human hair.
(Ok, all right, it is still hard to imagine. But now, it is somewhat clearer that this is really an unbelievably humongous number.)
Outer space rubber stamp throwing
Another “useful” analogy might be to throw rubber stamps from space until they hit a randomly placed document’s tiny stamp area (again, a chance of 1 in 120 trilliard per throw). This can be completely parallelized, given enough stamps and hands to throw them.
Proof of History
PoW combines consensus and progress into essentially a single mechanism. Solana, however, splits this into two:
- PoH is used to make and record progress at each node individually, while
- PBFT (Practical Byzantine Fault Tolerance) is used to reach consensus, merging the partial histories.
Technical description
Proof of History aims to avoid the energy waste associated with Proof of Work. Hash computations can no longer be parallelized, therefore, there is no incentive to have more than one processor per node performing the calculations.
The hash operations are strictly serialized, i.e., it is impossible to begin the calculations necessary for the next hash before the calculations for the previous hash have ended.
Also, there is no shortcut. Let’s look at a counter-example: If each operation were a linear operation, e.g., an addition of a constant C or a multiplication by a fixed factor F, several steps could be combined, i.e., a hundred steps could be performed at once by just adding 100·C or multiplying by F¹⁰⁰. By their very design, cryptographic hash functions need to be non-linear; therefore, these shortcuts are impossible.
While creation is strictly linear and serial, verification can be parallelized: Someone who has received a large number of these PoH fragments between two well-defined locations can verify the accuracy of each of those fragments in parallel with all other fragments. (The verification of a single fragment would still require going through its hashes strictly in sequence, though.)
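A minimal sketch of such a serialized hash chain, using sha256sum and a made-up "genesis" seed: each tick hashes the previous output, so tick N+1 cannot start before tick N has finished.

```shell
# Toy PoH hash chain: the input of every step is the output of the
# previous one, forcing strictly sequential computation.
h="genesis"
for i in 1 2 3; do
  h=$(printf '%s' "$h" | sha256sum | cut -d' ' -f1)
  echo "tick $i: $h"
done
```

Verification, in contrast, can re-hash many recorded (input, output) pairs independently, and therefore in parallel.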
Dice rolling analogy
Probably the least helpful (and least entertaining) of these PoH analogies is dice rolling: You can roll the next die only after the previous one was rolled.
Outer space rubber stamp throwing
More fun (if still somewhat outlandish, even literally) is to have our space-faring blockchain enthusiasts throw auto-homing rubber stamps:
The way the rubber of the stamp is carved determines the aerodynamics of the falling stamp. Imagine, for a second, that the carving affected the trajectory in the atmosphere in such a way that the same carving always results in the same landing destination, independent of where the stamp was thrown from and how fast it was thrown. Because our fairy-tale rubber-stamp aerodynamics are so complicated, nobody can determine the landing location without actually throwing the stamp and waiting for it to land.
Now, the PoH rule is to throw the next stamp only after the first one has landed, carving the previous stamp’s exact location into the new stamp’s rubber. This results in a pre-determined pattern of landing locations to be recorded by the stamps. Even though it is pre-determined, it cannot be calculated without actually throwing the stamps and waiting for each of them to land.
Micro-Blockchain analogy
A third analogy would be to look at what each mining node produces as a blockchain, with slightly changed rules:
- There is no endless dice rolling to create a new block, i.e., every “micro-block” is valid (put differently, you always roll a winning one, or “every block counts”)
- Only micro-blocks that have any transactions in them will be output, together with information about the number of skipped empty blocks. (This is all the information that is required to reconstruct those blocks that were skipped.)
This can be seen in the image on the right: The hash chain (H) makes continuous progress. If any transaction T arrives, it is included in the hash calculation and the result R is output and recorded, ready for inclusion into the next actual blockchain block (which we could also call a “macro-block”).
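The micro-block rules above can be sketched as follows (the "transactions", tick count, and seed are all made up; empty strings stand for ticks without transactions):

```shell
# Micro-block sketch: keep hashing continuously; emit a record only when
# a transaction arrives, together with the number of skipped empty
# micro-blocks (enough to reconstruct them later).
h="genesis"
skipped=0
for tx in "" "" "tx-A" "" "tx-B"; do
  if [ -n "$tx" ]; then
    h=$(printf '%s:%s' "$h" "$tx" | sha256sum | cut -d' ' -f1)
    echo "tx=$tx after $skipped empty micro-blocks: $h"
    skipped=0
  else
    h=$(printf '%s' "$h" | sha256sum | cut -d' ' -f1)
    skipped=$((skipped + 1))
  fi
done
```

Only two records are emitted (for tx-A and tx-B), yet the full chain of five hashes remains verifiable from the skip counts.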
#PeerToPeer #Blockchain #Bitcoin #ProofOfWork #ProofOfHistory #Analogy
chrony NTS certificate reload
The chrony NTS daemon has no way to automatically reload its NTS certificate. A quick hack fixes this.
(This is part 5 in the NTS series. Did you already read part 1?)
In my setup, chrony
inherits the certificate from a web server, which displays information and status. The web server manages certificates automatically using Let’s Encrypt and has no way of restarting chrony
on certificate updates.
Restarting chronyd
is also (currently) the only way to have it reload its NTS certificate. (Other key material can be reloaded without restarting the daemon, but not certificates or private keys.)
#!/bin/sh
find /etc/chrony/keys/ \
    -name signed.crt \
    -newer /run/chrony/chronyd.pid \
    -exec systemctl restart chrony \;
What does it do?
- It checks whether, in the directory /etc/chrony/keys/, there is a file signed.crt (actually, find will search anywhere below that directory). That is the certificate file signed by the Let’s Encrypt CA.
- If that file is newer than /run/chrony/chronyd.pid (the file containing the daemon’s process ID, written at chronyd start time), it restarts the daemon (systemctl restart chrony).
- Therefore, the restart action will take place whenever the certificate has been updated after the last launch of the daemon.
Depending on your system, you will have to change the paths, file names, and command.
You should save this as an executable file (chmod 755) in /etc/cron.daily/, e.g., as chrony-reread-certificate.
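To convince yourself of the find(1) condition before installing the cron job, you can exercise the same logic on throwaway files (all paths below are temporary stand-ins, not the real chrony paths):

```shell
# Dry run of the "certificate newer than pid file" condition.
d=$(mktemp -d)
touch "$d/chronyd.pid"     # pretend chronyd started now
sleep 1
touch "$d/signed.crt"      # pretend the certificate was renewed later
# Prints the certificate path, i.e., a restart would be triggered:
find "$d" -name signed.crt -newer "$d/chronyd.pid" -print
```

If the pid file were touched again afterwards (daemon restarted), the same find would print nothing, so the cron job restarts chrony at most once per renewal.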
GitHub - SteveLTN/https-portal: A fully automated HTTPS server powered by Nginx, Let's Encrypt and Docker.
A fully automated HTTPS server powered by Nginx, Let's Encrypt and Docker. - SteveLTN/https-portal (GitHub)
Transparent, Trustworthy Time with NTP and NTS
«Time is Money», as the old adage says. Who controls the time controls all kinds of operations and businesses around the world, and therefore controls the world. Today, we all take accurate time for granted, even though it is still mostly delivered over the Internet unsecured. But this is easy to change: Switching from NTP to NTS is as easy and important as moving from HTTP to HTTPS.
(Part 1 in the NTS series.)

Table of Contents
- What could possibly go wrong?
- Reliable time
- Installing NTS capable software
- Configuring Chrony or NTPsec
- Raspberry Pi caveats
- Public NTS-capable servers
- More servers
- History
What could possibly go wrong?
In the old times™, accuracy in the order of decades (“during the reign of King Herod”) or years (“a boy of seven summers”) might have been good enough. Today, hours and minutes are the minimum for many processes and in some cases, even milliseconds are too coarse.

Anything from industrial control systems over (distributed) file systems to everything even remotely related to security depends on time and will break one way or another if time is wrong: HTTPS/TLS/X.509 certificates, PGP keys, DNSSEC, two-factor authentication, overwriting or expiring backups, and even vaccination certificates, to name just a few.
Reliable time
Different applications have different requirements for time: For some, such as rate measurement devices, a coherent pace may be important. For others, including certificate validity checks, the actual, absolute time is key; they can live with uneven speeds of time advancing.

There might have been a time when exploring each application’s detailed reliability requirements was necessary. However, today, for the vast majority of everyday applications, the answer is easy: Just use authenticated NTP, aka NTS!
Installing NTS capable software
First, you need NTS-capable software. It appears that NTPsec and Chrony are currently the only choices. In this example, we use Chrony, but feel free to use NTPsec instead.
- GNU/Linux: Almost all distributions include NTPsec or Chrony prepackaged in a way that installing them will replace the preinstalled time synchronization service. However, only NTPsec >= 1.2.0 and Chrony >= 4.0 include NTS support. Debian 11 (bullseye; also buster-backports), Ubuntu 21.04 (hirsute), Arch Linux, and Fedora 34 and newer are among the distributions including sufficiently new versions.
- Other Unix-like systems: Check for packages or build instructions; most BSD systems include a Chrony package.
- MacOS: Install ChronyControl (or install Chrony with Homebrew and disable timed from the launchd configuration).
- Windows: Apparently, there is no NTS-capable software available there.
Some example setups:
Debian/Ubuntu
apt install chrony # Uninstalls other NTP software
Fedora
Already installed, nothing to do!

FreeBSD

pkg install chrony
echo chronyd_enable=YES >> /etc/rc.conf.local

The configuration file will live in /usr/local/etc/chrony.conf.

MacOS GUI
Download and install ChronyControl, then select Action → Install chrony. The configuration can be edited from the Gears menu in the window.

MacOS Command Line
Install Homebrew, then run the following commands:
brew install chrony
sudo launchctl disable system/com.apple.timed
sudo tee << "EOF" /Library/LaunchDaemons/org.tuxfamily.chrony.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>org.tuxfamily.chrony</string>
    <key>Nice</key>
    <integer>-10</integer>
    <key>ProgramArguments</key>
    <array>
      <string>/usr/local/sbin/chronyd</string>
      <string>-n</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>
EOF
sudo launchctl load /Library/LaunchDaemons
# Only after you create /etc/chrony.conf
sudo launchctl start system/org.tuxfamily.chrony
The configuration file will be /etc/chrony.conf; the service will not start without it.

Configuring Chrony or NTPsec
Then, choose 3-4 NTS servers from the list of public NTS servers below and add them to the configuration file (e.g., /etc/chrony/chrony.conf or /etc/ntpsec/ntp.conf), making sure that they have the nts flag set. For a configuration in Switzerland, the following lines might be added to the configuration file (syntax compatible with both Chrony and NTPsec):
server ntp.3eck.net iburst nts
server ntp.trifence.ch iburst nts
server ntp.zeitgitter.net iburst nts
If you use Chrony, you may want to consider adding the authselectmode prefer directive on a line of its own to disable non-NTS sources in the presence of NTS servers. After your server chimes reliably, you can also consider removing the unauthenticated (non-NTS) sources (server, peer, and pool directives).

Raspberry Pi caveats
The popular Raspberry Pi computers lack a real-time clock, as do several other single-board or embedded computers, including most home routers. This means that on power-up, these machines do not have the faintest idea what time it is, not even the year or decade. This makes it impossible to verify the validity of TLS certificates, as is required for establishing a trustworthy HTTPS or NTS connection. (On those machines, you might have seen “Invalid certificate” warnings, especially if you only connect them to the network after you have already logged in.)

This results in a chicken-and-egg problem: The machine does not know the time, therefore, it cannot get a reliable time, as it relies on NTS verifying the validity period of the certificate.
If you can use Chrony, the easiest and most secure way out is to add the following configuration line:
nocerttimecheck 1
This instructs Chrony not to check the certificate validity time until it has set the clock for the first time. Short of adding an RTC chip or GNSS (“GPS”) receiver, this is the most secure option possible. It beats the alternative of adding enough non-NTS servers to make the (entirely unauthenticated!) initial time setting possible: with that approach, if the time is ever maliciously set wrong, it will never revert to the real time.

Public NTS-capable servers
Currently, the number of public servers with NTS support still seems very modest. Therefore, I maintain a list of public NTS servers. As of 2023-08-12, it has been moved to its own page: Public NTS Server List.
More servers
NTS is an important upgrade to NTP. Hopefully, the number of NTP servers with NTS authentication will grow quickly. This should not be too hard, as all the infrastructure (e.g., Let’s Encrypt), tools, and know-how used to migrate HTTP to HTTPS in recent years can be reused. Upgrading an existing NTP server to NTS is even simpler than upgrading a web server, as there are no problems with HTTP redirects and mixed content. (Basically, it is as easy as pointing NTPsec or Chrony to the private key and certificate chain.) If (or when) you run a public server with NTS, please let me know, so I can add it to this list.
History
2021-12-28: Added/improved install instructions for various platforms; added some additional servers and outlined listing policy.
2021-12-31: Added ntp.br servers.
2022-01-08: Listed Linux distributions with NTS-capable software.
2022-01-10: Added 0xt.ca servers by agreement.
2022-01-11: Added time.signorini.ch by agreement.
2022-02-03: Added nts1.adopo.net, www.jabber-germany.de, and www.masters-of-cloud.de by agreement.
2022-06-04: Added virginia.time.system76.com, ohio.time.system76.com, and oregon.time.system76.com by agreement.
2022-08-02: Added ntpmon.dcs1.biz by agreement. Added ntp3.fau.de. Updated NL to production servers. Updated the paragraph below the table.
2022-09-04: Removed most 0xt.ca servers as these will be shutting down at the end of this month. time.0xt.ca will remain.
2022-09-26: Added {paris,brazil}.time.system76.com by agreement.
2023-08-12: Moved the list out of this post to the Public NTS Time Server List page.
Why time is important in today’s protocols
Assuring consistent state at multiple sites is very hard in the presence of timeouts and errors. Most solutions require complex protocols, multiple round-trip times, and/or large amounts of communication or storage. Therefore, instead of trying to achieve perfect synchronization through protocols such as two-phase commit, often a “soft state” approach is chosen, where the validity of state (e.g., credentials) expires after some well-defined period. This ensures that inconsistencies will self-heal, putting the burden on reliable time.
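A minimal soft-state sketch with made-up numbers (the issuance timestamp, TTL, and the check() helper are all hypothetical) shows how the whole mechanism hinges on the validating site's clock:

```shell
# A credential is considered valid only within its time-to-live;
# a wrong local clock silently breaks (or wrongly extends) this check.
issued=1700000000   # timestamp when the credential was created
ttl=300             # validity period in seconds
check() {           # $1 = "current" time at the validating site
  if [ $(( $1 - issued )) -ge 0 ] && [ $(( $1 - issued )) -lt $ttl ]; then
    echo valid
  else
    echo expired
  fi
}
check $(( issued + 200 ))   # within the TTL
check $(( issued + 400 ))   # past the TTL
```

If an attacker can turn the validator's clock back, expired state never self-heals; hence the need for authenticated time.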
#Security #Timestamps #NTP #NTS
Why ninety-day lifetimes for certificates?
We’re sometimes asked why we only offer certificates with ninety-day lifetimes. People who ask this are usually concerned that ninety days is too short and wish we would offer certificates lasting a year or more, like some other CAs do. (letsencrypt.org)
Debugging NTS problems
Debugging is hard, debugging security protocols doubly so. And there are not many tools and how-to’s available for NTS yet. So, here’s a (short) list of NTS problems I have seen and some tricks for debugging them.
(This is part 4 of the NTS series. Have you already read part 1?)
Table of Contents
The network, your enemy
NTP packet loss
Already for plain NTP, it is known that packets sometimes magically disappear, especially across transatlantic links. Some people suggest that some ISPs rate-limit NTP packets to protect against DDoS. So, some issues you might run into with NTS are older than (and independent of) NTS.
Medium-sized NTP packet loss
NTS packets are longer than plain NTP packets. NTP packets that were used in amplification DDoS attacks several years ago are also longer than plain NTP packets. So some ISPs aggressively filter NTP packets longer than usual. Thus, depending on the ISPs your packets travel through, NTS packets might be very unreliable or even impossible to transmit. If it is your own ISP doing the filtering, you can try to talk to them and have them remove these filters entirely, or at least for your servers. Most NTP servers running today can no longer be abused for amplification attacks, and there are better ways than blindly filtering port 123 traffic. Also, Chrony does not support the parts of the NTP protocol that could be used for amplification. At all.
Source ports
Probably also in a (misguided) attempt at preventing DDoS attacks, some ISPs block packets from privileged ports (<1024) to the NTP port (123). Maybe some also only block port-123-to-port-123 traffic.
This problem might manifest itself in Chrony clients being able to talk to your server without problems, while packets from NTPd and NTPsec clients are all dropped.
In Switzerland, Sunrise is currently known to block all incoming packets from privileged ports to port 123. This is the reason time.signorini.ch is listed as “Chrony-only” in the public server list.
Certificate issues
Public online tests for HTTPS and several other protocols with TLS are well-known and easy to come by. However, if you think that NTS-KE could be the culprit, there seem to be no online tests available.
However, Miroslav Lichvar provided a helpful one-liner to mimic an NTS-KE client:
printf '\x80\x1\x0\x2\x0\x0\x80\x4\x0\x2\x0\xf\x80\x0\x0\x0' | \
  gnutls-cli -p 4460 --alpn=ntske/1 ntp.example.ch \
  --logfile=/dev/stderr | hexdump -C
For all NTS-KE implementations, you should see gnutls-cli listing the certificate chain. This should include the entire TLS chain; i.e., if using Let’s Encrypt, it should include the following three certificates:
Processed 128 CA certificate(s).
Resolving 'ntp.trifence.ch:4460'...
Connecting to '109.202.196.249:4460'...
- Certificate type: X.509
- Got a certificate list of 3 certificates.
- Certificate[0] info:
 - subject `CN=ntp.trifence.ch', issuer `CN=R3,O=Let's Encrypt,C=US', serial 0x0366cc5bebce8b22d46ed99773b5cb293e21, RSA key 2048 bits, signed using RSA-SHA256, activated `2021-12-19 15:03:57 UTC', expires `2022-03-19 15:03:56 UTC', pin-sha256="YZ3NzoJSQofn3qJGVGxwr+zqK3TwKUkGKECRMUXDqKQ="
Public Key ID:
	sha1:d440356a08558e52c97f3bc9a5c459b3ddd437c3
	sha256:619dcdce82524287e7dea246546c70afecea2b74f02949062840913145c3a8a4
Public Key PIN:
	pin-sha256:YZ3NzoJSQofn3qJGVGxwr+zqK3TwKUkGKECRMUXDqKQ=
- Certificate[1] info:
 - subject `CN=R3,O=Let's Encrypt,C=US', issuer `CN=ISRG Root X1,O=Internet Security Research Group,C=US', serial 0x00912b084acf0c18a753f6d62e25a75f5a, RSA key 2048 bits, signed using RSA-SHA256, activated `2020-09-04 00:00:00 UTC', expires `2025-09-15 16:00:00 UTC', pin-sha256="jQJTbIh0grw0/1TkHSumWb+Fs0Ggogr621gT3PvPKG0="
- Certificate[2] info:
 - subject `CN=ISRG Root X1,O=Internet Security Research Group,C=US', issuer `CN=DST Root CA X3,O=Digital Signature Trust Co.', serial 0x4001772137d4e942b8ee76aa3c640ab7, RSA key 4096 bits, signed using RSA-SHA256, activated `2021-01-20 19:14:03 UTC', expires `2024-09-30 18:14:03 UTC', pin-sha256="C5+lpZ7tcVwmwQIMcRtPbsQtWLABXhQzejna0wHFr8M="
- Status: The certificate is trusted.
- Description: (TLS1.3-X.509)-(ECDHE-SECP256R1)-(RSA-PSS-RSAE-SHA256)-(AES-256-GCM)
- Session ID: 7B:C9:66:88:E8:5B:90:0D:52:6C:0C:55:2D:A2:7E:C9:85:0A:21:AB:85:B5:B4:B9:71:52:A3:FE:71:A9:26:CD
- Options:
- Application protocol: ntske/1
- Handshake was completed
One thing you should expect to see is the status line near the end, “The certificate is trusted.”
When testing against a Chrony server, you will also get some 50 lines of hex dump, detailing the NTS-KE response. NTPsec servers do not seem to like the hand-crafted NTS-KE request message (the printf command above) and gnutls-cli will just report:
*** Fatal error: Error in the pull function.
*** Server has terminated the connection abnormally.
NTPsec as a client will also provide extensive logging with the NTSc tag, which can also be helpful while diagnosing difficulties. On my system, these end up in /var/log/syslog. On systemd-based machines with Chrony, journalctl -xeu chrony is your friend. Your mileage may vary, though.
Additional resources
NTP Filtering (Delay & Blockage) in the Internet
NTP (Network Time Protocol) messages are sometimes rate-limited or blocked entirely by Internet operators. This little-known “NTP filtering” was put into place several years ago in resp…Weberblog.net
Transparent, Trustworthy Time with NTP and NTS
«Time is Money», as the old adage says. Who controls the time controls all kinds of operations and businesses around the world, and therefore controls the world. Today, we all take accurate time for granted, even though it is mostly delivered over the Internet unsecured. But this is easy to change: Switching from NTP to NTS is as easy and important as moving from HTTP to HTTPS.
(Part 1 in the NTS series.)
Table of Contents
- What could possibly go wrong?
- Reliable time
- Installing NTS capable software
- Configuring Chrony or NTPsec
- Raspberry Pi caveats
- Public NTS-capable servers
- More servers
- History
What could possibly go wrong?
In the old times™, accuracy on the order of decades (“during the reign of King Herod”) or years (“a boy of seven summers”) might have been good enough. Today, hours and minutes are the minimum for many processes and in some cases, even milliseconds are too coarse.
Anything from industrial control systems through (distributed) file systems to everything even remotely related to security depends on time and will break one way or another if time is wrong: HTTPS/TLS/X.509 certificates, PGP keys, DNSSEC, two-factor authentication, overwriting or expiring backups, and even vaccination certificates, to name just a few.
Reliable time
Different applications have different requirements on time: For some, such as rate measurement devices, a coherent pace may be important. For others, including certificate validity checks, the actual, absolute time is key; they can live with uneven speeds of time advancing.
There might have been a time when exploring each application’s detailed reliability requirements was necessary. Today, however, for the vast majority of everyday applications, the answer is easy: Just use authenticated NTP, aka NTS!
Installing NTS capable software
First, you need NTS-capable software. NTPsec and Chrony currently appear to be the only choices. In this example, we use Chrony, but feel free to use NTPsec instead.
- GNU/Linux: Almost all distributions include NTPsec or Chrony prepackaged in such a way that installing them will replace the preinstalled time synchronization service. However, only NTPsec ≥ 1.2.0 and Chrony ≥ 4.0 include NTS support. Debian 11 (bullseye; also buster-backports), Ubuntu 21.04 (hirsute), Arch Linux, and Fedora 34 and newer are among the distributions including sufficiently new versions.
- Other Unix-like systems: Check for packages or build instructions; most BSD systems include a Chrony package.
- MacOS: Install ChronyControl (or install Chrony with Homebrew and disable timed in the launchd configuration).
- Windows: Apparently, there is no NTS software support available there.
Some example setups:
Debian/Ubuntu
apt install chrony # Uninstalls other NTP software
Fedora
Already installed, nothing to do!
FreeBSD
pkg install chrony
echo chronyd_enable=YES >> /etc/rc.conf.local
The configuration file will live in /usr/local/etc/chrony.conf.
MacOS GUI
Download and install ChronyControl, then select Action → Install chrony. The configuration can be edited from the Gears menu in the window.
MacOS Command Line
Install Homebrew, then run the following commands:
brew install chrony
sudo launchctl disable system/com.apple.timed
sudo tee << "EOF" /Library/LaunchDaemons/org.tuxfamily.chrony.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>org.tuxfamily.chrony</string>
    <key>Nice</key>
    <integer>-10</integer>
    <key>ProgramArguments</key>
    <array>
      <string>/usr/local/sbin/chronyd</string>
      <string>-n</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>
EOF
sudo launchctl load /Library/LaunchDaemons
# Only after you create /etc/chrony.conf
sudo launchctl start system/org.tuxfamily.chrony
The configuration file will be /etc/chrony.conf; the service will not start without it.
Configuring Chrony or NTPsec
Then, choose 3-4 NTS servers from the list of public NTS servers below and add them to the configuration file (e.g., /etc/chrony/chrony.conf or /etc/ntpsec/ntp.conf), making sure that they have the nts flag set. For a configuration in Switzerland, the following lines might be added to the configuration file (syntax compatible with both Chrony and NTPsec):
server ntp.3eck.net iburst nts
server ntp.trifence.ch iburst nts
server ntp.zeitgitter.net iburst nts
If you use Chrony, you may want to consider adding the authselectmode prefer directive on a line of its own to disable non-NTS sources in the presence of NTS servers. After your server chimes reliably, you can also consider removing the unauthenticated (non-NTS) sources (server, peer, and pool directives).
Raspberry Pi caveats
The popular Raspberry Pi computers lack a real-time clock, as do several other single-board or embedded computers, including most home routers. This means that on power-up, these machines do not have the faintest idea what time it is, not even the year or decade. This makes it impossible to verify the validity of TLS certificates, as is required for establishing a trustworthy HTTPS or NTS connection. (On those machines, you might have seen “Invalid certificate” warnings, especially if you only connect them to the network after you have already logged in.)
This results in a chicken-and-egg problem: The machine does not know the time; therefore, it cannot get a reliable time, as it relies on NTS verifying the validity period of the certificate.
If you can use Chrony, the easiest and most secure way out is to add the following configuration line:
nocerttimecheck 1
This instructs Chrony not to check the certificate validity time until it has set the clock for the first time. Short of adding an RTC chip or GNSS (“GPS”) receiver, this is the most secure option possible. It beats the other option of adding enough non-NTS servers to make (entirely unauthenticated!) initial time setting possible. (If the time is ever maliciously off, it will never revert back to the real time!)
Public NTS-capable servers
Currently, the number of public servers with NTS support still seems very modest. Therefore, I maintain a list of public NTS servers. As of 2023-08-12, it has been moved to its own page: Public NTS Server List.
More servers
NTS is an important upgrade to NTP. Hopefully, the number of NTP servers with NTS authentication will grow quickly. This should not be too hard, as all the infrastructure (e.g., Let’s Encrypt), tools, and know-how that have been used to migrate HTTP to HTTPS in recent years can be reused. Upgrading an existing NTP server to NTS is even simpler than upgrading a web server, as there are no problems with HTTP redirects and mixed content. (Basically, it is as easy as pointing NTPsec or Chrony to the private key and certificate chain.)
If (or when) you run a public server with NTS, please let me know, so I can add it to this list.
History
2021-12-28: Added/improved install instructions for various platforms; added some additional servers and outlined listing policy.
2021-12-31: Added ntp.br servers.
2022-01-08: Listed Linux distributions with NTS-capable software.
2022-01-10: Added 0xt.ca servers by agreement.
2022-01-11: Added time.signorini.ch by agreement.
2022-02-03: Added nts1.adopo.net, www.jabber-germany.de, and www.masters-of-cloud.de by agreement.
2022-06-04: Added virginia.time.system76.com, ohio.time.system76.com, and oregon.time.system76.com by agreement.
2022-08-02: Added ntpmon.dcs1.biz by agreement. Added ntp3.fau.de. Updated NL to production servers. Updated the paragraph below the table.
2022-09-04: Removed most 0xt.ca servers as these will be shutting down at the end of this month. time.0xt.ca will remain.
2022-09-26: Added {paris,brazil}.time.system76.com by agreement.
2023-08-12: Moved list out of this post to the Public NTS Time Server List page.
NTS and dynamic IP addresses
The good news is that NTS relies on DNS names, no longer “naked” IP addresses. But what happens when the DNS name changes, pointing to a different IP address? A look at the protocol, the Chrony source, and the implications.
(This is part 3 in the NTS series. Did you already read part 1?)
Table of Contents
- The NTS protocol: RFC 8915
- Cookie refresh
- The Chrony source
- Being client to a dynamic NTS server
- How long for recovery?
- Operating a dynamic TLS server
- Additional addresses
The NTS protocol: RFC 8915
RFC 8915 starts with a protocol overview in its Figure 1, republished here. From the address resolution perspective, the following happens:
- The NTS client obtains the IP address(es) of the NTS-KE (Key Establishment) service through a DNS lookup of the name specified in the server statement of the configuration file.
- The client connects to the NTS-KE server over TCP, typically to port 4460, and performs a TLS handshake. The NTS-KE server will return a list of 8 security cookies and the cryptographic parameters. (Optionally, the actual IP address or host name of the NTP server(s) can be returned, if an address other than that of the NTS-KE server should be used.)
- The client then talks to this IP address, verifying the authenticity of the peer using those cookies.
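The NTS-KE request records are simple type-length-value structures. A sketch building, in Python, the same 16 request bytes as the printf one-liner shown in the debugging post (record types and IDs as assigned in RFC 8915; purely illustrative):

```python
import struct

def record(rec_type: int, body: bytes, critical: bool = True) -> bytes:
    """Encode one NTS-KE record: 2-byte type (high bit = critical), 2-byte length, body."""
    type_field = rec_type | (0x8000 if critical else 0)
    return struct.pack(">HH", type_field, len(body)) + body

# NTS-KE request: negotiate NTPv4 and the AEAD algorithm AES-SIV-CMAC-256 (ID 15)
request = (
    record(1, struct.pack(">H", 0))     # Next Protocol Negotiation: NTPv4
    + record(4, struct.pack(">H", 15))  # AEAD Algorithm Negotiation
    + record(0, b"")                    # End of Message
)

assert request == bytes.fromhex("80010002000080040002000f80000000")
```

Sending these bytes over a TLS connection with ALPN ntske/1 to port 4460 is exactly what the gnutls-cli pipeline does.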
So, the main differences (besides the security aspects) are:
- There is now always a DNS name (FQDN) involved, as this is also used to authenticate the TLS session. Some legacy NTP setups still preferred IP addresses, as this made NTP independent of DNS. (Setups without FQDN are theoretically possible, but at the cost of losing most security benefits of NTS.)
- The NTS-KE server could return a different IP address for the actual NTP handshake. It seems that this protocol feature is not currently used (and neither Chrony nor NTPsec currently support it).
So, in theory, NTS should work with NTS servers behind dynamic IP addresses.
Cookie refresh
After this initial, rather expensive, NTS-KE phase, the NTP protocol continues almost as efficiently as before: On each NTP exchange, one cookie is used, and a refill of the cookie pool is requested in a single (longer) NTP message.
Normally, this means one cookie is used and immediately replaced. However, if a request goes unanswered (packet lost, server down/unreachable, …), more than one cookie needs to be returned to get back to 8 spares.
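The bookkeeping described above can be sketched as a toy model (not the actual Chrony code; the target of 8 spares comes from the NTS-KE exchange described in the overview):

```python
COOKIE_TARGET = 8  # spares handed out by NTS-KE; the client keeps this many

def poll(cookies: int, server_answered: bool) -> int:
    """One NTP poll: spend a cookie; a reply refills the jar back to the target."""
    if cookies == 0:
        return 0          # jar empty: a fresh NTS-KE run is needed instead
    cookies -= 1          # one cookie authenticates this request
    if server_answered:
        cookies = COOKIE_TARGET  # the response returns enough cookies to refill
    return cookies

jar = COOKIE_TARGET
assert poll(jar, server_answered=True) == 8   # normal case: used and refilled
for _ in range(8):                            # 8 unanswered polls drain the jar
    jar = poll(jar, server_answered=False)
assert jar == 0                               # now NTS-KE must run again
```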
The Chrony source
When no cookies are available, either because the client just started up or because the last 8 NTP packets went unanswered, the get_cookies() function is called to replenish them. This again starts with the DNS lookup explained above. So, a DNS lookup is performed again, which hopefully returns the new address of the NTS-KE server, and everyone is happy again.
I presume that NTPsec behaves similarly; at least, that’s what is supposed to happen.
Being client to a dynamic NTS server
So, what happens, when a server you talk to changes IP address?
First, the NTP client will continue and try to talk to the server at the old IP address. However, there will be no response at all, or, if the new machine behind that address also talks NTP and/or NTS, the answers will not be correct, as the authentication of the NTP packets will not succeed.
After eight such attempts, all cookies will have been used up, causing a protocol restart to replenish the cookie jar.
How long for recovery?
With the default value of maxpoll 10, after some initialization time, the server will be polled every 2¹⁰ = 1024 seconds, resulting in the cookie storage being expended after around 8192 seconds, or roughly 2¼ hours. If the DNS record has expired by then (which it should, if you expect the address to change periodically), the NTS-KE phase will succeed immediately and time synchronisation will resume after roughly 136 minutes. If the local clock runs reasonably stably or other NTP/NTS sources are configured, this should go unnoticed by the rest of the system.
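In numbers, the figures above can be checked quickly:

```python
# Back-of-the-envelope check of the recovery time after an address change.
maxpoll = 10
poll_interval = 2 ** maxpoll      # seconds between polls at maxpoll 10
cookies = 8                       # spare cookies handed out by NTS-KE
outage = cookies * poll_interval  # seconds until the cookie jar is empty

assert poll_interval == 1024
assert outage == 8192             # about 2 1/4 hours...
assert outage // 60 == 136        # ...i.e., roughly 136 minutes
```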
However, if the old DNS record has not been updated or is still cached, for example, after an unexpected change or downtime, further delays can be introduced. Here the parameters from the Chrony source:
- If the NTS-KE connection attempt fails, it is retried with exponential backoff starting at 2⁴ = 16 seconds, with a limit of 2¹⁷ = 131072 seconds, roughly 1½ days.
- If the NTS-KE connection succeeds but TLS fails, the same exponential backoff is initiated, but starting at 2¹⁰ = 1024 seconds already.
Updated 2022-01-12: The standard says that clients SHOULD start with at least 10 seconds and increase by a factor of 1.5 after every failure, up to a maximum retry interval of 5 days.
So, if your NTS server changes its IP address without a timely DNS update, after about a day or so, Chrony clients will only retry connecting to your server every 1½ days. The same happens when your server is misconfigured or serves an expired certificate. [Updated 2022-01-12:] Expect that some clients might even retry only every 5 days, so really try to avoid this situation.
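Both retry schedules can be compared with a small sketch (constants as stated above; the generators are illustrative models, not Chrony's actual code):

```python
from itertools import islice

def chrony_backoff(start_exp=4, max_exp=17):
    """Chrony's schedule: 2^4 = 16 s, doubling up to 2^17 = 131072 s (~1.5 days)."""
    delay = 2 ** start_exp
    while True:
        yield delay
        delay = min(delay * 2, 2 ** max_exp)

def rfc8915_backoff(start=10, factor=1.5, cap=5 * 86400):
    """RFC 8915 recommendation: start at >= 10 s, grow by 1.5x, cap at 5 days."""
    delay = start
    while True:
        yield delay
        delay = min(delay * factor, cap)

chrony = list(islice(chrony_backoff(), 20))
rfc = list(islice(rfc8915_backoff(), 30))
assert chrony[0] == 16 and chrony[-1] == 2 ** 17  # settles at ~1.5 days
assert rfc[0] == 10 and rfc[-1] == 5 * 86400      # settles at 5 days
```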
Operating a dynamic TLS server
First of all, any NTP server should aim for stable IP addresses. Sometimes, however, you have no choice. And if the IP address changes only once in a blue moon, the roughly 2-hour outage seen by clients will likely not be a problem.
But you should take care that your DNS record is updated in time, i.e., the latency before the update plus the DNS TTL should sum to at most 2 hours; less, if you have clients with smaller maxpoll.
About 2 hours after the IP address change, the server will receive an NTS-KE request from each of its NTS clients. It should be prepared to handle this additional load.
Also, rate limits should take this into account. The Chrony rate limits are off by default and are per IP address. The default after enabling NTS rate limits is to only accept 1 NTS-KE request every 64 seconds. This will not be enough if you have many NTS clients behind a single NAT/CGNAT address, so you will have to increase the burst value.
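With Chrony, the relevant directive is ntsratelimit; a sketch of a server-side configuration (the burst value of 16 is an illustrative guess, tune it to your client population):

```
# chrony.conf: rate-limit NTS-KE requests per source address.
# interval 6 means one request per 2^6 = 64 seconds on average;
# burst allows that many back-to-back requests before limiting kicks in.
ntsratelimit interval 6 burst 16
```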
Independent of whether you operate a server behind a dynamic or static IP address, you should also take care that your server always responds to port 4460 queries, and with the correct certificate. Otherwise, clients will quickly fall back to retrying the NTS-KE connection only every 1½ to 5 days.
Additional addresses
Something similar happens if you add additional addresses to your server, for example an IPv6 address, after you started IPv4 only. Unless your legacy address becomes unavailable for a few hours or clients restart, they will not pick up the additional address.
(Changing addresses/protocols often is not desirable, BTW, as this may lead to additional jitter. For example, the NTP Pool monitors see delays between the same pair of machines differ by 5 ms and more, just because of different routes or queueing configuration by the transit ISPs.) ntppool.org/a/trifence
Timestamping: Why should I care?
This entry is part 1 of 2 in the series Timestamping
Timestamping documents is the basis for many forms of trust and evidence. As such, it is a building block for contracts and agreements, and dispute resolution. A timestamped document essentially says,
- this document did exist at this point in time, and
- it has not been modified or tampered with since.
These are probably the two most important ingredients for any contract or other agreement. Without them, any agreement would be moot.
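In digital form, the minimal building block behind these two properties is a cryptographic hash of the document bound to a time by a trusted party. A toy sketch (no trusted service involved; the function names are made up for illustration):

```python
import hashlib

def make_timestamp(document: bytes, now: float) -> dict:
    """Bind the document's hash to a point in time (a real service would sign this)."""
    return {"sha256": hashlib.sha256(document).hexdigest(), "time": now}

def verify(document: bytes, stamp: dict) -> bool:
    """True iff the document is byte-identical to what was stamped."""
    return hashlib.sha256(document).hexdigest() == stamp["sha256"]

doc = b"We, the undersigned, agree ..."
stamp = make_timestamp(doc, now=1704067200.0)  # fixed time, for reproducibility
assert verify(doc, stamp)              # unmodified: existed at stamp["time"]
assert not verify(doc + b"!", stamp)   # any later change is detected
```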
A timestamp alone, however, does not say,
- who authored the document,
- whether the person requesting the timestamp did have the right to have this copy, and
- who agrees to the contents of this document.
However, some of them may be implied or added through other means in addition to the timestamp.
Creating a fake document ahead of time is very hard, as important information will typically be missing.
Table of Contents
- Who authored the document?
- Does the bearer of this document have the right to have it?
- Who agrees to the contents?
- Summary
Who authored the document?
This is probably the hardest question to answer, ever, and has been asked many times, even before digital documents started appearing. Did Mark Twain actually write Huckleberry Finn? Who wrote which parts of the Bible? Who authored which historic document? Who made a particular discovery? Who did the research leading to it?
Often, this can only be known for sure if a trustworthy person actually observed the process in real time. Nevertheless, the first person shown to have had a copy of the document has an advantage when it comes to authorship claims.
Therefore, timestamping is actually beneficial here as well.
Does the bearer of this document have the right to have it?
This question is also hard to answer. But for a human who understands the context and has access to all pertinent legal documents surrounding the copy in question, this may be a solvable case. These humans are often known as judges. This already shows that this question can rarely be answered automatically, even less so by a computer program with access only to the copy itself.
So, there is little (beyond authorship attribution, see above) that a timestamp can do.
Who agrees to the contents?
This is probably the simplest of the three questions, as procedures have been established for two or more parties who would like to ensure that they all agree. The process is called signing, and the signatures can then be used as evidence. This applies to hand-written as well as digital signatures.
Agreement: Hand-written signatures
However, hand-written signatures are not bound to the contents of the document itself. So, it is possible to alter the document later or transfer the signature to another, fake, contract. If all parties get a scan of the signed contracts, timestamped, it can be assured that the scan has not been tampered with since.
As most forgeries are only created much later, timestamps are a great piece of evidence in this case.
Agreement: Digital signatures
One would think that digital signatures already contain everything that is needed:
- A tamper-evident connection between the contents and the signature
- A signature which cannot be transferred from one document to another
- A signature which cannot be created unless a secret is known (which is typically kept secret by the signer)
- A signature which can be verified by anyone having access to the signer’s public key
- The date and time at which this signature was made
The big difference between an “ordinary” digital signature and a “real” (single-purpose) timestamp is that the date and time of the digital signature is determined by the signer and may thus be tampered with.
Furthermore, ordinary keys and certificates used for digital signatures have (a) an expiration date and (b) can be revoked at any time for any reason. Both actions render the key invalid.
But, what should a verifier do, when it sees an invalid key? It has to consider the signature as invalid. If it does not do so, it could fall victim to a fake signature, as the correct use and storage of the key is no longer guaranteed.
I.e., anyone gaining access to such an invalid key could backdate a signature and it would have to be questionable whether the signature is genuine. Therefore, in addition to the ordinary signature, also a trusted timestamp is needed to be able to trust signatures in the future.
How this is achieved will be discussed in part 3 of this series.
Summary
Whatever you do, it is important to have your data and documents timestamped by a trustworthy service or person. So, you do care! Or, at least, you really should.
Series Navigation
A brief history of time(stamping) >>
A brief history of time(stamping)
This entry is part 2 of 2 in the series Timestamping
When I tell people about timestamping, they often react with, “Ah, yes, that Blockchain thing”. However, timestamping is as old as civilisation and has some interesting properties it gains from non-Blockchain applications. So let’s go back a few millennia first.
More on blockchain in this overview and in the simple, humorous, yet thorough article «Hitchhiker’s Guide to the Blockchain».
Timestamping through times
Timestamping is an important part of establishing trust.
So it should not come as a surprise that the Romans already employed notaries public to testify to the accuracy of an agreement and when it was made.
Throughout time, libraries have also been used as trusted authorities, keeping records of when they acquired a particular book and thereby implicitly timestamping its contents.
1556 marks the first documented use of registered mail. It documents when a particular piece of mail, regardless of its content, was sent from sender to recipient.
Robert Hooke and Isaac Newton timestamped some results in 1660 and 1677, respectively, before publishing them, by communicating anagrams to independent third parties.
We all know the old detective stories where clandestine information was exchanged through classified ads in newspapers. Here, the newspaper can be used both as a proof that a particular piece of information existed before publication time and as a commitment or proof of posting.
Another crime-related use of newspaper timestamping was taking photographs of an abducted person holding that day’s newspaper. This was used to prove that the photograph was not backdated.
Modern times
Computers and modern cryptography opened entirely new avenues for timestamping. Starting in 1988, the invention of cryptographic commitment schemes allowed Alice to state that she knew (or had chosen) a certain fact without revealing it (the commitment). Later, Alice could disclose this information to Bob, at the same time proving that it was what she had committed to.
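To make the commit-then-reveal idea concrete, here is a minimal hash-based sketch in Python. This is a modern simplification, not the original 1988 construction, and the function names are made up for this example:

```python
import hashlib
import secrets

def commit(message: bytes) -> tuple[bytes, bytes]:
    """Alice commits: she publishes the digest, keeping message and nonce secret."""
    nonce = secrets.token_bytes(32)  # random blinding value, prevents guessing attacks
    digest = hashlib.sha256(nonce + message).digest()
    return digest, nonce

def verify(digest: bytes, nonce: bytes, message: bytes) -> bool:
    """Bob later checks the disclosed message against the earlier commitment."""
    return hashlib.sha256(nonce + message).digest() == digest

digest, nonce = commit(b"I predict X wins the election")
# ... time passes; Alice reveals message and nonce ...
assert verify(digest, nonce, b"I predict X wins the election")
assert not verify(digest, nonce, b"I predict Y wins the election")
```

Because the digest was published first, Alice is bound to her message; because only the digest was published, Bob learned nothing until she revealed it.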
In 1995, Matthew Richardson and his I.T. Consultancy Limited started the PGP Digital Timestamping Service, which still is operational and in use today. There, trust is achieved by publishing (and therefore committing to) the signatures the timestamping service creates. Also, they are sequentially numbered and then grouped and cryptographically linked in multiple hierarchies. Both mechanisms limit the operator’s backdating abilities, allowing users and verifiers to gain trust in the correct operation of the system.
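The sequential numbering and cryptographic linking can be sketched as follows. This is a drastically simplified, linear model in Python; the actual service additionally signs each entry and links across multiple hierarchies:

```python
import hashlib
import json

def link(prev_hash: str, seqno: int, doc_hash: str, issued_at: str) -> dict:
    """Issue one sequentially numbered timestamp, hash-linked to its predecessor."""
    body = {"seqno": seqno, "time": issued_at, "doc": doc_hash, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def valid_chain(entries: list[dict]) -> bool:
    """Recomputing every link detects any later alteration or backdating attempt."""
    prev = "0" * 64  # genesis value
    for i, e in enumerate(entries):
        body = {k: e[k] for k in ("seqno", "time", "doc", "prev")}
        if e["prev"] != prev or e["seqno"] != i:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

chain, prev = [], "0" * 64
for i, doc in enumerate([b"report.pdf v1", b"report.pdf v2"]):
    entry = link(prev, i, hashlib.sha256(doc).hexdigest(), f"2022-01-0{i+1}T12:00:00Z")
    chain.append(entry)
    prev = entry["hash"]
assert valid_chain(chain)
chain[0]["time"] = "2021-12-31T12:00:00Z"  # attempt to backdate the first entry
assert not valid_chain(chain)
```

Once the latest hash has been published, the operator can no longer rewrite any earlier entry without the change being detectable.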
In 2000, the LOCKSS system (Lots Of Copies Keep Stuff Safe) was started, which uses libraries as a large-scale distributed repository of electronic books, introducing independence as an additional trust measure.
Since 2001, git and other distributed version control systems can be used to efficiently share cryptographically linked information with independent parties.
Series Navigation
<< Timestamping: Why should I care?
#Timestamps #History #Blockchain
Timestamping: Why should I care?
This entry is part 1 of 2 in the series Timestamping
Timestamping documents is the basis for many forms of trust and evidence. As such, it is a building block for contracts, agreements, and dispute resolution. A timestamped document essentially says,
- this document did exist at this point in time, and
- it has not been modified or tampered with since.
These are probably the two most important ingredients for any contract or other agreement. Without them, any agreement would be moot.
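As a toy model of those two claims (a real timestamping service would additionally sign the token; names here are illustrative), a timestamp boils down to a digest plus a time:

```python
import hashlib

def make_timestamp(document: bytes, now: str) -> dict:
    """What a timestamping service attests to: this digest existed at this time."""
    return {"sha256": hashlib.sha256(document).hexdigest(), "time": now}

def verify_timestamp(document: bytes, token: dict) -> bool:
    """True only if the document is bit-for-bit what was timestamped."""
    return hashlib.sha256(document).hexdigest() == token["sha256"]

token = make_timestamp(b"We agree to deliver 100 units.", "2022-03-01T09:00:00Z")
assert verify_timestamp(b"We agree to deliver 100 units.", token)
# Any later modification, however small, is detected:
assert not verify_timestamp(b"We agree to deliver 1000 units.", token)
```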
A timestamp alone, however, does not say,
- who authored the document,
- whether the person requesting the timestamp did have the right to have this copy, and
- who agrees to the contents of this document.
However, some of them may be implied or added through other means in addition to the timestamp.
Creating a fake document ahead of time is very hard, as important information will typically be missing.
Table of Contents
- Who authored the document?
- Does the bearer of this document have the right to have it?
- Who agrees to the contents?
- Summary
Who authored the document?
This is probably the hardest question to answer, ever, and has been asked many times, even before digital documents started appearing. Did Mark Twain actually write Huckleberry Finn? Who wrote which parts of the Bible? Who authored which historic document? Who made a particular discovery? Who did the research leading to it?
Often, this can only be known for sure if a trustworthy person actually observed the process in real time. Nevertheless, the first person shown to have had a copy of the document has an advantage when it comes to authorship claims.
Therefore, timestamping is actually beneficial here as well.
Does the bearer of this document have the right to have it?
This question is also hard to answer. But for a human who understands the context and has access to all pertinent legal documents surrounding the copy in question, it may be solvable. These humans are often known as judges. This already shows that the question can rarely be answered automatically, even less so by a computer program with access only to the copy itself.
So, there is little (beyond authorship attribution, see above) that a timestamp can do here.
Who agrees to the contents?
This is probably the simplest of the three questions, as procedures have been established for cases where two or more parties would like to ensure that they all agree. The process is called signing, and the signatures can then be used as evidence. This applies to hand-written as well as digital signatures.
Agreement: Hand-written signatures
However, hand-written signatures are not bound to the contents of the document itself. So, it is possible to alter the document later or to transfer the signature to another, fake, contract. If all parties get a timestamped scan of the signed contract, it can be assured that the scan has not been tampered with since.
As most forgeries are only created much later, timestamps are a great piece of evidence in this case.
Agreement: Digital signatures
One would think that digital signatures already contain everything that is needed:
- A tamper-evident connection between the contents and the signature
- A signature which cannot be transferred from one document to another
- A signature which cannot be created unless a secret is known (which is typically kept secret by the signer)
- A signature which can be verified by anyone having access to the signer’s public key
- The date and time at which this signature was made
The big difference between an “ordinary” digital signature and a “real” (single-purpose) timestamp is that the date and time of the digital signature is determined by the signer and may thus be tampered with.
Furthermore, ordinary keys and certificates used for digital signatures have (a) an expiration date and (b) can be revoked at any time for any reason. Both actions render the key invalid.
But what should a verifier do when it sees an invalid key? It has to consider the signature invalid. If it did not, it could fall victim to a fake signature, as the correct use and storage of the key is no longer guaranteed.
That is, anyone gaining access to such an invalid key could backdate a signature, and it would be questionable whether the signature is genuine. Therefore, in addition to the ordinary signature, a trusted timestamp is also needed to be able to trust signatures in the future.
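The verifier’s decision rule just described can be sketched like this. This is an illustrative model only, with made-up names; real validation additionally involves certificate chains, revocation lists/OCSP, and cryptographic timestamp tokens:

```python
from datetime import datetime
from typing import Optional

def signature_trustworthy(trusted_timestamp: datetime,
                          key_valid_from: datetime,
                          key_valid_until: datetime,
                          key_revoked_at: Optional[datetime]) -> bool:
    """A signature remains trustworthy after key expiry or revocation only if a
    *trusted* timestamp proves it was created while the key was still valid."""
    if not (key_valid_from <= trusted_timestamp <= key_valid_until):
        return False  # timestamped outside the key's validity period
    if key_revoked_at is not None and trusted_timestamp >= key_revoked_at:
        return False  # timestamped after revocation: could be forged
    return True

# Signed (and trustworthily timestamped) in 2020, key expired end of 2021: still fine.
assert signature_trustworthy(datetime(2020, 6, 1),
                             datetime(2019, 1, 1), datetime(2021, 12, 31), None)
# Timestamp falls after the key was revoked: must be rejected.
assert not signature_trustworthy(datetime(2022, 2, 1),
                                 datetime(2019, 1, 1), datetime(2022, 12, 31),
                                 datetime(2022, 1, 15))
```

Note that the signer-supplied date plays no role here: only the independently trusted timestamp decides.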
How this is achieved will be discussed in part 3 of this series.
Summary
Whatever you do, it is important to have your data and documents timestamped by a trustworthy service or person. So, you do care! Or, at least, you really should.
Series Navigation
A brief history of time(stamping) >>
X.509 User Certificate-based Two-Factor Authentication for Web Applications
Marcel Waldvogel, Thomas Zink: X.509 User Certificate-based Two-Factor Authentication for Web Applications. In: Müller, Paul; Neumair, Bernhard; Reiser, Helmut; Dreo Rodosek, Gabi (Ed.): 10. DFN-Forum Kommunikationstechnologien, 2017.
Abstract
An appealing property to researchers, educators, and students is the openness of the physical environment and IT infrastructure of their organizations. However, to the IT administration, this creates challenges way beyond those of a single-purpose business or administration. Especially the personally identifiable information or the power of the critical functions behind these logins, such as financial transactions or manipulating user accounts, require extra protection in the heterogeneous educational environment with single-sign-on. However, most web-based environments still lack a reasonable second-factor protection or at least the enforcement of it for privileged operations without hindering normal usage.
In this paper we introduce a novel and surprisingly simple yet extremely flexible way to implement two-factor authentication based on X.509 user certificates in web applications. Our solution requires only a few lines of code in web server configuration and none in the application source code for basic protection. Furthermore, since it is based on X.509 certificates, it can be easily combined with smartcards or USB cryptotokens to further enhance security.
BibTeX (Download)
@inproceedings{Waldvogel-X509,
  title = {X.509 User Certificate-based Two-Factor Authentication for Web Applications},
  author = {Marcel Waldvogel and Thomas Zink},
  editor = {Paul Müller and Bernhard Neumair and Helmut Reiser and Dreo Rodosek, Gabi},
  url = {netfuture.ch/wp-content/upload… = {2017},
  date = {2017-05-30},
  urldate = {1000-01-01},
  booktitle = {10. DFN-Forum Kommunikationstechnologien},
  abstract = {An appealing property to researchers, educators, and students is the openness of the physical environment and IT infrastructure of their organizations. However, to the IT administration, this creates challenges way beyond those of a single-purpose business or administration. Especially the personally identifiable information or the power of the critical functions behind these logins, such as financial transactions or manipulating user accounts, require extra protection in the heterogeneous educational environment with single-sign-on. However, most web-based environments still lack a reasonable second-factor protection or at least the enforcement of it for privileged operations without hindering normal usage. In this paper we introduce a novel and surprisingly simple yet extremely flexible way to implement two-factor authentication based on X.509 user certificates in web applications. Our solution requires only a few lines of code in web server configuration and none in the application source code for basic protection. Furthermore, since it is based on X.509 certificates, it can be easily combined with smartcards or USB cryptotokens to further enhance security.},
  keywords = {Federated Services, Identity Management, Passwords, Security, Usability, Web Applications, X.509},
  pubstate = {published},
  tppubtype = {inproceedings}
}
#Security #IdentityManagement #FederatedServices #Passwords #WebApplications #Usability
Transparent, Trustworthy Time with NTP and NTS
«Time is Money», as the old adage says. Who controls the time controls all kinds of operations and businesses around the world, and therefore controls the world. Today, we all take accurate time for granted, even though it is mostly delivered over the Internet unsecured. But this is easy to change.
Switching from NTP to NTS is as easy and important as moving from HTTP to HTTPS.
(Part 1 in the NTS series.)
Table of Contents
- What could possibly go wrong?
- Reliable time
- Installing NTS capable software
- Configuring Chrony or NTPsec
- Raspberry Pi caveats
- Public NTS-capable servers
- More servers
- History
What could possibly go wrong?
In the old times™, accuracy in the order of decades (“during the reign of King Herod”) or years (“a boy of seven summers”) might have been good enough. Today, hours and minutes are the minimum for many processes and in some cases, even milliseconds are too coarse.
Anything from industrial control systems over (distributed) file systems to everything even remotely related to security depends on time and will break one way or another if time is wrong: HTTPS/TLS/X.509 certificates, PGP keys, DNSSEC, two-factor authentication, overwriting or expiring backups, and even vaccination certificates, to name just a few.
Reliable time
Different applications have different requirements on time: For some, such as rate measurement devices, a coherent pace may be important. For others, including certificate validity checks, the actual, absolute, time is key; they can live with uneven speeds of time advancing.
There might have been a time when exploring each application’s detailed reliability requirements was necessary. Today, however, for the vast majority of everyday applications, the answer is easy: Just use authenticated NTP, aka NTS!
Installing NTS capable software
First, you need NTS-capable software. It appears that NTPsec and Chrony are currently the only choices. In this example, we use Chrony, but feel free to use NTPsec instead.
- GNU/Linux: Almost all distributions include NTPsec or Chrony prepackaged in a way that installing them will replace the preinstalled time synchronization service. However, only NTPsec >= 1.2.0 and Chrony >= 4.0 include NTS support. Debian 11 (bullseye; also buster-backports), Ubuntu 21.04 (hirsute), Arch Linux, and Fedora 34 and newer are among the distributions including sufficiently new versions.
- Other Unix-like systems: Check for packages or build instructions; most BSD systems include a Chrony package.
- MacOS: Install ChronyControl (or install Chrony with Homebrew and disable timed from the launchd configuration).
- Windows: Apparently, no NTS-capable software is available there.
Some example setups:
Debian/Ubuntu
apt install chrony # Uninstalls other NTP software
Fedora
Already installed, nothing to do!
FreeBSD
pkg install chrony
echo chronyd_enable=YES >> /etc/rc.conf.local
The configuration file will live in /usr/local/etc/chrony.conf.
MacOS GUI
Download and install ChronyControl, then select Action → Install chrony. The configuration can be edited from the Gears menu in the window.
MacOS Command Line
Install Homebrew, then run the following commands:
brew install chrony
sudo launchctl disable system/com.apple.timed
sudo tee << "EOF" /Library/LaunchDaemons/org.tuxfamily.chrony.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>org.tuxfamily.chrony</string>
    <key>Nice</key>
    <integer>-10</integer>
    <key>ProgramArguments</key>
    <array>
      <string>/usr/local/sbin/chronyd</string>
      <string>-n</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>
EOF
sudo launchctl load /Library/LaunchDaemons
# Only after you create /etc/chrony.conf
sudo launchctl start system/org.tuxfamily.chrony
The configuration file will be /etc/chrony.conf; the service will not start without it.
Configuring Chrony or NTPsec
Then, choose 3-4 NTS servers from the list of public NTS servers below and add them to the configuration file (e.g., /etc/chrony/chrony.conf or /etc/ntpsec/ntp.conf), making sure that they have the nts flag set. For a configuration in Switzerland, the following lines might be added to the configuration file (syntax compatible with both Chrony and NTPsec):
server ntp.3eck.net iburst nts
server ntp.trifence.ch iburst nts
server ntp.zeitgitter.net iburst nts
If you use Chrony, you may want to consider adding the authselectmode prefer directive on a line of its own to disable non-NTS sources in the presence of NTS servers. After your server chimes reliably, you can also consider removing the unauthenticated (non-NTS) sources (the server, peer, and pool directives).
Raspberry Pi caveats
The popular Raspberry Pi computers lack a real-time clock, as do several other single-board or embedded computers, including most home routers. This means that on power-up, these machines do not have the faintest idea what time it is, not even the year or decade. This makes it impossible to verify the validity of TLS certificates, as is required for establishing a trustworthy HTTPS or NTS connection. (On those machines, you might have seen “Invalid certificate” warnings, especially if you only connect it to the network after you have already logged in.)
This results in a chicken-and-egg problem: The machine does not know the time, therefore, it cannot get a reliable time, as it relies on NTS verifying the validity period of the certificate.
If you can use Chrony, the easiest and most secure way out is to add the following configuration line:
nocerttimecheck 1
This instructs Chrony to not check the certificate validity time until it has set the clock the first time. Short of adding an RTC chip or GNSS (“GPS”) receiver, this is the most secure option possible. It beats the other option of adding enough non-NTS servers to make (entirely unauthenticated!) initial time setting possible. (If the time is ever maliciously off, it will never revert back to the real time!)
Public NTS-capable servers
Currently, the number of public servers with NTS support still seems very modest.
Therefore, I maintain a list of public NTS servers. As of 2023-08-12, it has been moved to its own page: Public NTS Server List.
More servers
NTS is an important upgrade to NTP. Hopefully, the number of NTP servers with NTS authentication will grow quickly. This should not be too hard, as all the infrastructure (e.g., Let’s Encrypt), tools, and know-how used to migrate HTTP to HTTPS in recent years can be reused. Upgrading an existing NTP server to NTS is even simpler than upgrading a web server, as there are no problems with HTTP redirects and mixed content. (Basically, it is as easy as pointing NTPsec or Chrony to the private key and certificate chain.)
If (or when) you run a public server with NTS, please let me know, so I can add it to this list.
History
2021-12-28: Added/improved install instructions for various platforms; added some additional servers and outlined listing policy.
2021-12-31: Added ntp.br servers.
2022-01-08: Listed Linux distributions with NTS-capable software.
2022-01-10: Added 0xt.ca servers by agreement.
2022-01-11: Added time.signorini.ch by agreement.
2022-02-03: Added nts1.adopo.net, www.jabber-germany.de, and www.masters-of-cloud.de by agreement.
2022-06-04: Added virginia.time.system76.com, ohio.time.system76.com, and oregon.time.system76.com by agreement.
2022-08-02: Added ntpmon.dcs1.biz by agreement. Added ntp3.fau.de. Updated NL to production servers. Updated the paragraph below the table.
2022-09-04: Removed most 0xt.ca servers as these will be shutting down at the end of this month. time.0xt.ca will remain.
2022-09-26: Added {paris,brazil}.time.system76.com by agreement.
2023-08-12: Moved list out of this post to the Public NTS Time Server List page.
Why time is important in today’s protocols
Assuring consistent state at multiple sites is very hard in the presence of timeouts and errors. Most solutions require complex protocols, multiple round-trip times, and/or large amounts of communication or storage. Therefore, instead of trying to achieve perfect synchronization (e.g., via two-phase commit), often a “soft state” approach is chosen, where the validity of state (e.g., credentials) expires after some well-defined period. This ensures that inconsistencies will self-heal, putting the burden on reliable time.
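As an illustrative Python sketch of such soft state (class and method names are made up; real systems would replicate and persist this), an entry simply becomes invalid once its wall-clock deadline passes — which is exactly where reliable time matters:

```python
import time

class SoftState:
    """Credential cache whose entries expire, so inconsistencies self-heal.
    Correctness depends on the local wall clock being (roughly) right."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.entries: dict[str, tuple[object, float]] = {}

    def put(self, key: str, value: object) -> None:
        # Store the value together with its absolute expiry time.
        self.entries[key] = (value, time.time() + self.ttl)

    def get(self, key: str):
        item = self.entries.get(key)
        if item is None:
            return None
        value, expires = item
        if time.time() >= expires:
            del self.entries[key]  # stale: fail safe rather than serve old state
            return None
        return value

cache = SoftState(ttl_seconds=0.05)
cache.put("session", "alice")
assert cache.get("session") == "alice"
time.sleep(0.06)                    # validity period elapses
assert cache.get("session") is None # entry has self-healed away
```

A clock that is maliciously set forward or backward would expire credentials too early or keep stale ones alive, which is why such designs put the burden on trustworthy time.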
#Security #Timestamps #NTP #NTS
Why ninety-day lifetimes for certificates?
We’re sometimes asked why we only offer certificates with ninety-day lifetimes. People who ask this are usually concerned that ninety days is too short and wish we would offer certificates lasting a year or more, like some other CAs do.
letsencrypt.org
Network Time Security: NTS articles overview
NTP, the Network Time Protocol, is how most computers and mobile devices obtain their time. NTS (Network Time Security) is to NTP what HTTPS is to HTTP. It is also as easy to upgrade to as HTTPS is these days: no effort for the client, just adding a certificate for the server. More details in the following articles:
NTS Series
- Transparent, Trustworthy Time with NTP and NTS: A motivation and introduction. Also refers to a list of official, semi-official and community NTS servers.
- Configuring an NTS-capable NTP server: What to do and why.
- NTS and dynamic IP addresses: What to take care of when running an NTS server behind a dynamic IP address.
- Debugging NTS problems: Know-how and tools for detecting and fixing strange behavior.
- Automatically restarting chronyc after a certificate change.
- Public NTS server list [Added 2023-08-12]
Our public NTS+NTP servers
- ntp.trifence.ch
- ntp.zeitgitter.net
- Other servers
The above links point to information and statistics of the servers. Independent statistics are also provided by the NTP Pool project (production servers, closer test servers).
More time servers needed!
NTS is a relatively young protocol, with only a small selection of NTS-capable time servers currently available. If you are interested in reliable time, please consider upgrading your existing NTP time server to also support NTS. It would be great if official timekeeping sites, universities, and companies around the world would join, in addition to volunteers.
Configuring an NTS-capable NTP server
The choice of Network Time Protocol (NTP) servers supporting NTS is still very limited. Here is some advice to get it to run smoothly and trustworthily.
Netfuture: The future is networked
Transparent, Trustworthy Time with NTP and NTS
«Time is Money», as the old adage says. Who controls the time, controls all kinds of operations and businesses around the world. And therefore, controls the world. Today, we all take accurate time for granted. Even though, today, it is delivered over the Internet mostly unsecured. But this is easy to change.Switching from NTP to NTS is as easy and important as moving from HTTP to HTTPS.
(Part 1 in the NTS series.)Table of Contents
- What could possibly go wrong?
- Reliable time
- Installing NTS capable software
- Configuring Chrony or NTPsec
- Raspberry Pi caveats
- Public NTS-capable servers
- More servers
- History
What could possibly go wrong?
In the old times™, accuracy in the order of decades (“during the reign of King Herod”) or years (“a boy of seven summers”) might have been good enough. Today, hours and minutes are the minimum for many processes and in some cases, even milliseconds are too coarse.Anything from industrial control systems over (distributed) file systems to everything even remotely related to security depends on time and will break one way or another if time is wrong: HTTPS/TLS/X.509 certificates, PGP keys, DNSSEC, two-factor authentication, overwriting or expiring backups, and even vaccination certificates, to name just a few.
Reliable time
Different applications have different requirements on time: For some, such as rate measurement devices, a coherent pace may be important. For others, including certificate validity checks, the actual, absolute, time is key; they can live with uneven speeds of time advancing.There might have been a time where exploring each application’s detailed reliability requirements might have been necessary. However, today, for the vast majority of everyday applications, the answer is easy: Just use authenticated NTP, aka NTS!
Installing NTS capable software
First, you need an NTS capable software. It appears that NTPsec and Chrony are currently the only choices. In this example, we use Chrony, but feel free to use NTPsec instead.
- GNU/Linux: Almost all distributions include NTPsec or Chrony prepackaged in a way that installing them will replace the preinstalled time synchronization service. However, only NTPsec>=1.2.0 and Chrony>=4.0 include NTS support.
Debian 11 (bullseye; also buster-packports), Ubuntu 21.04 (hirsute), Arch Linux, Fedora 34 and newer are among the distributions including sufficiently new versions.- Other Unix-like systems: Check for packages or build instructions; most BSD systems include a Chrony package.
- MacOS: Install ChronyControl (or, install Chrony with Homebrew and disable
timed
fromlaunchd
configuration).- Windows: Apparently, there is no NTS software support available there.
Some example setups:
Debian/Ubuntu
apt install chrony # Uninstalls other NTP software
Fedora
Already installed, nothing to do!FreeBSD
pkg install chronyecho chronyd_enable=YES >> /etc/rc.conf.local
The configuration file will live in/usr/local/etc/chrony.conf
.MacOS GUI
Download and install ChronyControl, then select Action → Install chrony. The configuration can be edited from the Gears menu in the window.MacOS Command Line
Install Homebrew, then run the following commands:
brew install chronysudo launchctl disable system/com.apple.timedsudo tee << "EOF" /Library/LaunchDaemons/org.tuxfamily.chrony.plist<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"><plist version="1.0"> <dict> <key>Label</key> <string>org.tuxfamily.chrony</string> <key>Nice</key> <integer>-10</integer> <key>ProgramArguments</key> <array> <string>/usr/local/sbin/chronyd</string> <string>-n</string> </array> <key>RunAtLoad</key> <true/> </dict></plist>EOFsudo launchctl load /Library/LaunchDaemons# Only after you create /etc/chrony.confsudo launchctl start system/org.tuxfamily.chrony
The configuration file will be/etc/chrony.conf
; the service will not start without it.Configuring Chrony or NTPsec
Then, choose 3-4 NTS servers from the list of public NTS servers, below and add them to the configuration file (e.g.,/etc/chrony/chrony.conf
or/etc/ntpsec/ntp.conf
), making sure that they have thents
flag set. For a configuration in Switzerland, the following lines might be added to the configuration file (syntax compatible with both Chrony and NTPsec):
server ntp.3eck.net iburst ntsserver ntp.trifence.ch iburst ntsserver ntp.zeitgitter.net iburst nts
If you use Chrony, you may want to consider adding the authselectmode prefer directive on a line of its own to disable non-NTS sources in the presence of NTS servers. After your server chimes reliably, you can also consider removing the unauthenticated (non-NTS) sources (the server, peer, and pool directives).

Raspberry Pi caveats
The popular Raspberry Pi computers lack a real-time clock, as do several other single-board or embedded computers, including most home routers. This means that on power-up, these machines do not have the faintest idea what time it is, not even the year or decade. This makes it impossible to verify the validity of TLS certificates, as is required for establishing a trustworthy HTTPS or NTS connection. (On those machines, you might have seen “Invalid certificate” warnings, especially if you only connect them to the network after you have already logged in.)

This results in a chicken-and-egg problem: The machine does not know the time; therefore, it cannot get a reliable time, as it relies on NTS verifying the validity period of the certificate.
If you can use Chrony, the easiest and most secure way out is to add the following configuration line:
nocerttimecheck 1
This instructs Chrony not to check the certificate validity time until it has set the clock for the first time. Short of adding an RTC chip or GNSS (“GPS”) receiver, this is the most secure option possible. It beats the other option of adding enough non-NTS servers to make (entirely unauthenticated!) initial time setting possible. (If the time is ever maliciously off, it will never revert back to the real time!)

Public NTS-capable servers
Currently, the number of public servers with NTS support still seems very modest. Therefore, I maintain a list of public NTS servers. As of 2023-08-12, it has been moved to its own page: Public NTS Server List.
More servers
NTS is an important upgrade to NTP. Hopefully, the number of NTP servers with NTS authentication will grow quickly. This should not be too hard, as all the infrastructure (e.g., Let’s Encrypt), tools, and know-how that have been used to migrate HTTP to HTTPS in recent years can be reused. Upgrading an existing NTP server to NTS is even simpler than upgrading a web server, as there are no problems with HTTP redirects and mixed content. (Basically, it is as easy as pointing NTPsec or Chrony to the private key and certificate chain.) If (or when) you run a public server with NTS, please let me know, so I can add it to this list.
History
- 2021-12-28: Added/improved install instructions for various platforms; added some additional servers and outlined listing policy.
- 2021-12-31: Added ntp.br servers.
- 2022-01-08: Listed Linux distributions with NTS-capable software.
- 2022-01-10: Added 0xt.ca servers by agreement.
- 2022-01-11: Added time.signorini.ch by agreement.
- 2022-02-03: Added nts1.adopo.net, www.jabber-germany.de, and www.masters-of-cloud.de by agreement.
- 2022-06-04: Added virginia.time.system76.com, ohio.time.system76.com, and oregon.time.system76.com by agreement.
- 2022-08-02: Added ntpmon.dcs1.biz by agreement. Added ntp3.fau.de. Updated NL to production servers. Updated the paragraph below the table.
- 2022-09-04: Removed most 0xt.ca servers as these will be shutting down at the end of this month. time.0xt.ca will remain.
- 2022-09-26: Added {paris,brazil}.time.system76.com by agreement.
- 2023-08-12: Moved list out of this post to the Public NTS Time Server List page.
Why time is important in today’s protocols
Assuring consistent state at multiple sites is very hard in the presence of timeouts and errors. Most solutions require complex protocols, multiple round-trip times, and/or large amounts of communication or storage. Therefore, instead of trying to achieve perfect synchronization, such as with two-phase commit, often a “soft state” approach is chosen, where the validity of state (e.g., credentials) expires after some well-defined period. This ensures that inconsistencies will self-heal, putting the burden on reliable time.
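A toy illustration of why soft state puts the burden on reliable time (all numbers are hypothetical): whether a credential is accepted depends only on the local clock, so whoever controls your clock controls credential validity.

```shell
# Illustrative only: a credential issued at time ISSUED remains valid for
# LIFETIME seconds; acceptance depends purely on the local clock value NOW.
ISSUED=1000
LIFETIME=300
for NOW in 1100 1400; do
  if [ $((NOW - ISSUED)) -lt "$LIFETIME" ]; then
    echo "clock=$NOW: credential valid"
  else
    echo "clock=$NOW: credential expired"
  fi
done
```

An attacker who can shift the clock backward keeps expired credentials “valid”; shifting it forward voids fresh ones, which is exactly why authenticated time matters.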
#Security #Timestamps #NTP #NTS
Why ninety-day lifetimes for certificates?
We’re sometimes asked why we only offer certificates with ninety-day lifetimes. People who ask this are usually concerned that ninety days is too short and wish we would offer certificates lasting a year or more, like some other CAs do. (letsencrypt.org)
Configuring an NTS-capable NTP server
The choice of Network Time Protocol (NTP) servers supporting NTS is still very limited. Here is some advice on getting yours to run smoothly and trustworthily.
(This is Part 2 in the NTS series.)
Table of Contents
- Actual NTP and NTS goals
- How not to achieve them
- Identifying criteria
- More servers!
- How to create a trustworthy server?
- Making your server reliable
- Upstream server choice
- Stratum-2 configuration example
- Stratum-1 configuration example
- Improving Monitoring
Actual NTP and NTS goals
The main user-visible goal of NTP is to receive accurate and stable time. It tries hard to identify clocks whose time looks wrong (“falsetickers”). However, in many setups, having one or two falsetickers as time sources may already cause your system’s time to be off, maybe even by an arbitrary amount. If an attacker can drop, inject, delay, or modify network packets, he is in control of your server’s time and thus in control of many processes, including security and safety related tasks.
NTS tries to prevent this. The foremost goals of the NTS protocol are identity and authentication. What you actually want is reliability and trust. As those are virtually impossible to ascertain by an automated protocol, identity and authentication are the closest matches and essential pillars for building reliability and trust.
How not to achieve them
With servers supporting NTS being as few and far between as they currently are, people are eagerly holding on to any NTS straws.
However, picking some random hosts from the Internet, just because they talk NTS, already misses the original premise of identity and authentication and has no chance of ever achieving reliability or trust. Using servers labeled as “test” may achieve the first two goals, but their maintainers do not want to guarantee the latter two.
Identifying criteria
NTS servers out there can be grouped into four categories:
- Officially sanctioned servers. Servers with an official or quasi-official duty to provide public, accurate and reliable time. They currently include NetNod/NTP.SE, PTB, and NIC/NTP.BR. (Although labeled as “pilot”, SIDN/Time.NL probably should currently be put into this category as well.)
- Corporations and organizations. Well-known organizations which as part of their internal and external services provide the time. Right now, Cloudflare seems the only one on this list.
- Community efforts. Individuals or communities trying to fill a hole. This is what we did in Switzerland, so far resulting in ntp.3eck.net, ntp.zeitgitter.net, and ntp.trifence.ch.
- Internal, development, test, random, and forgotten servers. The remainder, with no promises to their correctness or availability, neither implicit or explicit. Sometimes, not even the basic properties of identity and authentication are met.
This list shows a trend: If the top entries suddenly stop providing the service at a reasonable quality, a public outcry is to be expected: Among the taxpayers of a country, among the customers of the corporation, or among the supporters of the organization. We can expect them to go the extra mile to fix things quickly if anything should break unexpectedly, as they do not want to lose their reputation and the trust they have earned so far.
Community efforts are slightly weaker: There is less reputation to lose, but we can assume an intrinsic motivation to provide the service. So, there is at least an expectation to go the extra meter to keep things running smoothly, if not the extra mile.
For the last category, however, chances are slim that anyone will so much as blush when the service acts up. On the contrary, you might hear “how the hell did you even find me?” or “I told you so!” as an answer.
So, unless you have additional information, only consider servers from the first two or three categories for use in your production NTS setup.
More servers!
To alleviate the NTS scarcity, there is only one option: More servers in the first two or three categories:
- Talk to official bodies to upgrade their existing NTP service to NTS.
- Convince corporations and organizations (including universities) to start offering NTP service or upgrade to NTS. If you have a business relationship with them, consider opening a ticket.
- Start your own community effort.
The first two are slow-moving and can take months or years, even if you have the right connections. But the third is where everyone can contribute, so let’s focus on that one in the remainder of this post.
How to create a trustworthy server?
This checklist provides a starting point for a potential user of your service. If this information is clearly specified, evaluating trust, reliability, identity, and authenticity of the NTS service becomes a breeze.
- Who operates the service?
- Is that person or organization interested in keeping the service up and running? (Personal commitment, or fear of losing face)
- What is the access policy (may just read “public access”)?
- What is the current accuracy of the provided time?
- Is there a history of availability/accuracy? (Or is there enough organizational trust, that this is implied?)
So, if you want to provide a service which looks trustworthy, put that up on a web page. I would suggest serving at least some of this information from the NTS host itself: That machine already has a TLS certificate and a public IP address, so running a web server with a static page should be trivial.
The easiest way of showing current and recent reachability and accuracy is to link to the NTP pool’s status or profile page. If you operate a public NTS server, why not also make it available to the NTP pool?
Adding a host to the NTP pool for monitoring only.
And even if you do not want to make a server publicly available (e.g., because the machine and/or its link is too weak), you can register it in the pool and select “Monitoring only” from the connection speed menu, as long as you do not overuse this. Note the “Not active in the pool, monitoring only” comment in the screenshot (highlighting mine).
In addition, you may also run something like ntpviz to provide more detail. Providing transparency always makes it easier for someone else to trust you.
Making your server reliable
Adding your server to the NTP Pool has another benefit: You will be sent an email if your server’s quality (reachability, accuracy) drops so much that it would no longer be included in the pool. Of course, you may always add your own monitoring, especially if you want to become aware of even smaller quality drops or want to receive warnings earlier.
Monitoring only checks the outputs of your server; it is your duty to make sure you select inputs with high enough quality: Stable power (consider a USB power bank for a Raspberry Pi), stable Internet connection (also think power and cables) and good antenna reception (if you have a GNSS/GPS input). Last, but definitely not least: A good set of upstream servers.
Upstream server choice
For a high-quality NTP server, you may want to have at least five good sources at any time. Also, the vast majority of your sources should be good NTS sources (more than ⅔). Otherwise, you might just end up receiving manipulated unauthenticated input, which you will upgrade to authenticated, and output garbage (GIGO). But now, this garbage will be in your name, with your reputation behind it.
In an ideal world (hopefully already next year), there will be enough NTS sources available to make a good selection from those. But for now, you have a trade-off between accuracy, round-trip time, and NTS capability of your sources on the one hand, and the total number of sources on the other.
These Anycast responses are off by ~2 ms from what the next-door server receives from the same IP address.
I ended up with 5…7 stable NTS sources, most in the 10…20 ms RTT range. I also sprinkle in 1…2 low-RTT (~5 ms) unauthenticated sources. Resist the temptation to use low RTT servers whose clock is off or jitters.
Use non-NTS sources sparingly (at most 2) and only if there are not enough low-latency, high-quality NTS servers available. Your server will be redistributing that time as “authentic”, so make sure you vetted the sources accordingly. (And with non-NTS sources, you may never be sure of whether someone along the path may change the time recorded in the packets, as their contents are unauthenticated.) [Added paragraph 2022-02-22]
Anycast sources are great, if you “just” need the time. The anycast IP address you talk to may resolve to a different server every time. If you want to redistribute quality time, using an anycast server may turn out to be unpredictable or add jitter. Furthermore, Cloudflare will also load-balance your requests among different servers at the same location. Two of my machines are connected to the same ISP at the same location. However, they get time consistently differing by about 2 ms, the red line in the graph. Also, the response they get from Cloudflare indicates a different internal time source.
Having a local GNSS receiver (or DCF77 with phase modulation or …) is recommended. Due to infrastructure limitations, my GNSS receiver was not at the same location as the main time servers, but behind a cable modem line. This results in the blue jittery line in the graph above. So try to avoid this setup (and, yes, I am working on a better solution as well…). [Updated paragraph 2022-02-22]
I am pretty satisfied with the current result:
- The servers have at most a few 100 µs of offset and jitter to their active neighbors.
- To all other monitored servers, even over long-distance connections, Chrony believes they are still within 5 ms, even worst-case. (Monitoring is run on a VM, which adds additional jitter. But this is good enough for a quick overview [Added VM disclaimer 2022-02-22].)
I hope that fine-tuning will improve jitter even further. I plan to move the GPS receiver to the same network as the main servers, away from behind the jittery cable modem connection. Due to antenna placement issues, this requires some construction, so it will probably happen only in a few weeks’ time.
Stratum-2 configuration example
Fiddling with GPS receivers is tricky. So you may want to start serving Stratum-2 time first, i.e., from a server without a local reference clock.
```
# Some close-by NTS servers
server ntp.trifence.ch iburst nts
server ntp.zeitgitter.net iburst nts
server time.cloudflare.com iburst nts
server ntp.3eck.ch iburst nts
# Up to 2 (to avoid them having a quorum)
# close non-NTS servers for stability,
# if not enough close NTS servers are available
server ntp13.metas.ch iburst
# A few more servers, monitoring/comparison only
server time2.uni-konstanz.de iburst noselect
server d.st1.ntp.br iburst nts noselect
server nts.netnod.se iburst nts noselect
server ptbtime3.ptb.de iburst nts noselect
server nts.time.nl iburst nts noselect
```
Stratum-1 configuration example
I use a Raspberry Pi 2B with GPS HAT and an active antenna, running Chrony. The clock is configured as follows (in addition to the fallback NTS servers):
```
# Coarse time-of-day from gpsd via shared memory (NMEA sentences)
refclock SHM 0 offset 0.5 delay 0.5 refid NMEA
# Precise pulse-per-second signal, labeled using the NMEA time
refclock PPS /dev/pps0 refid GNSS lock NMEA prefer trust
```
Detailed instructions for a setup from scratch can be found on the GPSd pages, from Patrick O’Keeffe, or 0048ba. Also consider reducing Ethernet latency on the Raspberry Pi.
Improving Monitoring
[Added 2022-01-08] The NTP Pool monitoring, situated in Los Angeles, is known to occasionally report false positives for (at least) European sites. The issues seem to be filtering/rate limiting by the ISPs or transient connectivity problems. A second opinion may be helpful to differentiate between spurious warnings and actual problems. The preferred option would be for everyone to have their own monitoring, so you could also check your peers’. A quicker alternative might be to register with the development/test version of the NTP pool for monitoring purposes. This is not guaranteed to be up or reliable, even though it is in practice. One of the test monitors sits in Amsterdam, which is better for European sites.
Network latency and jitter from Amsterdam to European NTP servers is obviously better. However, it seems that the local clock of the test servers has higher drift.
Network Time Security: NTS articles overview
NTP, the Network Time Protocol, is the way most computers and mobile devices obtain their time. NTS (Network Time Security) is to NTP what HTTPS is to HTTP. It is also as easy to upgrade to as HTTPS is these days: No effort for the client, just adding a certificate for the server. More details in the following articles:

NTS Series
- Transparent, Trustworthy Time with NTP and NTS: A motivation and introduction. Also refers to a list of official, semi-official and community NTS servers.
- Configuring an NTS-capable NTP server: What to do and why.
- NTS and dynamic IP addresses: What to take care of when running an NTS server behind a dynamic IP address.
- Debugging NTS problems: Know-how and tools for detecting and fixing strange behavior.
- Automatically restarting chronyc after a certificate change.
- Public NTS server list [Added 2023-08-12]
Our public NTS+NTP servers
- ntp.trifence.ch
- ntp.zeitgitter.net
- Other servers
The above links point to information and statistics of the servers. Independent statistics are also provided by the NTP Pool project (production servers, closer test servers).
More time servers needed!
NTS is a relatively young protocol, with only a small selection of NTS-capable time servers currently available. If you are interested in reliable time, please consider upgrading your existing NTP time server to also support NTS. It would be great if official timekeeping sites, universities, and companies around the world would join, in addition to volunteers.

Public NTS Server List

NTS (Network Time Security) is to NTP (Network Time Protocol) essentially what HTTPS is to HTTP: It provides authenticity of the information. Unlike HTTPS, NTS does not provide any confidentiality, as the current time is public information. (Netfuture: The future is networked)
Generating Multi-Architecture Docker Images Made Easy
Docker has become my favorite virtualization technique. It provides a high level of abstraction with clear interfaces.
Docker is cool and portable, so creating your own Docker images is tempting. Users can get at your Docker image in two ways:
- Distributing Dockerfiles is the smallest thing to distribute: just a few lines of code describing how to build the image. However, this method comes with a disadvantage: The demands on the build environment are high.
- Distributing Docker images solves this, and Docker Hub and other platforms help you distribute your images. But have you tried building them for an architecture other than your native CPU architecture? This seems extremely complicated and not well documented for anyone interested in just getting it running.
Table of Contents
- Creating Multi-Arch Docker Images
- That’s not all, folks!
- Automating the process: Makefile
- Happy Multi-Architecture Coding!
- The fine print
Creating Multi-Arch Docker Images
So here is a simple recipe:

- Make sure your host can execute binaries for all kinds of architectures:

```shell
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
```

This uses a Docker image to tell Linux that it can run 29 additional CPU architectures. Of course, this comes at a cost: emulation using QEMU. This means things will run much slower. But hey, they will run at all! This is the precondition for building Docker images for a different architecture.

- Install buildx, the Docker build extension: Download the buildx binary for your CPU from github.com/docker/buildx/relea… and save it as ~/.docker/cli-plugins/docker-buildx.

- Create a multi-architecture builder:

```shell
docker buildx create --name docker-multiarch && docker buildx inspect --builder docker-multiarch --bootstrap
```

- Build your multi-architecture Docker image!

```shell
docker buildx build --builder docker-multiarch --platform linux/amd64,linux/386,linux/arm64,linux/arm/v6 <your-docker-dir>
```

This gives you Docker images for the named --platforms. Of course, your selection might differ, but for my use case, the Zeitgitter timestamping server, I believe that Intel/AMD servers and Raspberry Pis will be the main deployment platforms.
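The --platform argument is nothing more than a comma-separated list of OS/architecture pairs; buildx builds one image per pair and bundles them under a single manifest list. A trivial offline sketch (no Docker needed) of what that list expands to:

```shell
# Sketch: enumerate the platforms buildx will build, one per line.
PLATFORMS="linux/amd64,linux/386,linux/arm64,linux/arm/v6"
echo "$PLATFORMS" | tr ',' '\n'
```

After pushing to a registry, the per-platform entries of the resulting manifest list can be checked with docker buildx imagetools inspect followed by your tag name (assuming a reasonably recent buildx).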
That’s not all, folks!
However, having the images built inside the docker-multiarch container is not going all the way. You need to make them available, either locally or in a Docker repository. If you want to do the latter, add --push -t <repository-name> to the command line. But now, that’s really all, folks!
Automating the process: Makefile
The following excerpt from a Makefile shows how to automate this: [Updated 2023-10-10: Support pre-packaged buildx as well]

```make
# Modify these according to your needs
PLATFORMS = linux/amd64,linux/arm64,linux/arm/v6
TAG       = zeitgitter/zeitgitter
DOCKERDIR = zeitgitter

# This probably should remain as is
BUILDXDETECT1 = ${HOME}/.docker/cli-plugins/docker-buildx
BUILDXDETECT2 = /usr/libexec/docker/cli-plugins/docker-buildx

.PHONY: qemu buildx docker-multiarch-builder

docker-multiarch: qemu buildx docker-multiarch-builder
	docker buildx build --pull --push --platform ${PLATFORMS} -t ${TAG} ${DOCKERDIR}

qemu: /proc/sys/fs/binfmt_misc/qemu-m68k

/proc/sys/fs/binfmt_misc/qemu-m68k:
	docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

buildx:
	@if [ ! -x ${BUILDXDETECT1} -a ! -x ${BUILDXDETECT2} ]; then \
		echo '*** `docker buildx` missing. See `github.com/docker/buildx#insta…`'; \
		exit 1; \
	fi

docker-multiarch-builder: qemu buildx
	if ! docker buildx ls | grep -w docker-multiarch > /dev/null; then \
		docker buildx create --name docker-multiarch && \
		docker buildx inspect --builder docker-multiarch --bootstrap; \
	fi
```
Yes, the process of downloading and installing buildx could also be automated, but it isn’t as easily reversible, so I leave it to the user. The entire Makefile can be found here.
Happy Multi-Architecture Coding!
The fine print
I learnt a lot from these writeups:
- Docker themselves explain in their Documentation how to create Multi-Arch Images for the Mac. At first glance, this did not seem like it would be that helpful, but indeed, this was the key to putting it all together, from what I had been reading before.
- Adrian Mouat has a great writeup on the Docker Blog: Multi-Platform Docker Builds. I learnt a lot there, but it was missing the information that you do not need to deal with manifests yourself; buildx handles this transparently for you. Thanks, buildx folks!
- Christy Norman explains manifests in detail here. But you probably don’t need to deal with manifests yourself anymore.
- A more “manual” QEMU-based build procedure is described by Balena in “Building ARM containers on any x86 machine, even DockerHub”. Hey, we’ve gone a long way these past three years!
If you would like to have more insight into what happens or how to go on even further, you will find a lot of information in the above articles.
I also borrowed from the original Docker image and have seen a similar design to what I made from it, but fail to remember where I found it. If anyone has a clue, please remind me, so I can credit the idea.
#Linux #Timestamps #Docker #ARM #MultiPlatform #MultiArchitecture #Zeitgitter
Docker: Accelerated Container Application Development
Docker is a platform designed to help developers build, share, and run container applications. We handle the tedious setup, so you can focus on the code. (Simeon Ratliff, Docker)
Crypto donations: Neither anonymous nor without consequences
The zombie apocalypse is always portrayed wrong. The zombies are not after the people at all, only their data. Honest!
“I have nothing to hide!” is the standard answer when it comes to protecting one’s own data. And yet we all use curtains, passwords, doors, and envelopes.
More on the topic of blockchain and cryptocurrencies here.
For Ihar (name changed), a computer scientist from Belarus who has been living in Switzerland for years, this has changed drastically in recent weeks.
I researched his story for DNIP, and how it is really just one example of a much more far-reaching development.
More on blockchain
The big blockchain overview and the three latest articles on the topic:
Women’s football NFTs: Doubly wrong doesn’t make it better (2023-07-19)
On the occasion of the upcoming Women’s Football World Cup, a large-scale advertising campaign for NFTs featuring our national-team players is currently running. NFTs are, both technically and… read more
18 reasons why NFTs are unethical (2023-07-18)
Regular readers of my articles know that, from a technical perspective, I believe that neither NFTs nor smart contracts deliver what… read more
Inefficiency is good (sometimes) (2023-07-15)
Bureaucracy and inefficiency are frowned upon, often rightly so. But they also have their good sides: Applied correctly, they provide reliability… read more
#Blockchain #Datenschutz #Demokratie #InformatikUndGesellschaft
Metaverse and NFTs explained
🎦 ➡️ The talk on NFTs and the metaverse, as well as the public part of the Q&A, was recorded and can be watched here.
On 15 November 2022, the Digitale Gesellschaft is holding a Netzpolitischer Abend (net-politics evening). After a short introduction to the topic by me, including pros and cons, everyone is invited to a lively discussion. There is also a bar with drinks, with and without alcohol.
Per Anhalter durch die Blockchain: Ein Überblick
An English 🇬🇧 version is available here. Summary: A blockchain is a list of data blocks in which a (possibly open) group of authorized parties can propose new data blocks, which are then added by consensus… (Marcel Waldvogel)
📹 Net-politics evening on metaverse and NFTs available as a recording
For everyone who missed the evening in person: The video recordings of the streamed part of the net-politics evening on metaverse and NFTs are now available.
Links:
- Link to the post at the Digitale Gesellschaft, including a photo gallery
- Link to the video
- Announcement
- More information from me on NFTs and blockchains
media.ccc.de/v/dgna-4243-metav…
#Blockchain #InformatikUndGesellschaft #Metaverse #NFT
Metaverse und NFT
Recently, Pro Senectute beider Basel launched its «Swiss Crypto Marvels» NFT and shortly before that also announced its metaverse engagement… (media.ccc.de)
Hype-Tech
Why do certain hype topics such as blockchain or machine learning/artificial intelligence keep showing up in IT projects, even though the technology does not really fit the desired solution? Or even though simpler, better approaches exist?
Felix von Leitner gave a talk on «Hype-Tech» in autumn 2021 in which he points out some of the reasons (navigate the slides with swipe gestures or cursor keys).
The talk is available here:
The key takeaways:
- Optimization often targets the wrong goals: personal interests, careers, PR, benefits for the consulting firm, … instead of business value.
- Often it is (also) because the technology was never understood; only the vendors’/salespeople’s glossy brochure version was read.
- And nobody wants to be considered stupid/backward/… for not finding the technology revolutionary, unlike seemingly everyone else. [There must be a copy of «The Emperor’s New Clothes» lying around here somewhere… Where did I put it‽]
Contents
Quick navigation
Some important slides of the talk can be jumped to directly:
- Start
- The category-2 talk
- Case study 1: Blockchain (conclusion)
- Case study 2: Smart contracts (conclusion)
- Case study 3: Big data (conclusion)
- Case study 4: Machine learning («We don’t know what it has learned», self-interest/BaFin, spam filters)
- Summaries 1 and 2
Why?
This blog post exists mainly because I was tired of having to write the phrase «navigate the slides with swipe gestures or cursor keys» with every reference to this great talk. You would not believe how many responses of the form «There’s nothing there!» I got when sharing the link.
Apart from that, this talk is a must for every stakeholder in an IT project and cannot be promoted enough.
Data lifecycle questions, not only for Blockchains
In any data-centric application, understanding the data lifecycle (also as part of the product lifecycle) is important, especially when trust or traceability are goals. If blockchain is to play a role, the requirements associated with the data lifecycle can even become a decisive factor: either adapt the data model or the processes, or, if that is impossible, limit blockchain use to selected areas or abandon it entirely.
More on blockchain in the simple, humorous, yet thorough article «Hitchhiker’s Guide to the Blockchain». A German 🇩🇪 version of this article, entitled “Fragen zum Datenlebenszyklus, nicht nur für Blockchains“, is also available.
DHS S&TD Blockchain decision diagram. (Image source: After NIST IR 8202: Blockchain Technology Overview, 2018; page 42, PDF page 53)
From my experience, the following questions are key to understanding the lifecycle if trust, transparency, and/or traceability are involved. A more detailed list is below and can also be downloaded as a PDF.
- Data: Which data with which properties are we dealing with (structure, dependencies, data security/protection)?
- Criteria: Are integrity, traceability, immutability, confidentiality, or global consensus required?
- Input: What are the data sources? How does data get into the system?
- Quality: How is quality assured (correctness, uniformity, authenticity, completeness, …)?
- Culture of failure: What can/may/must happen if erroneous data finds its way into the system (accidentally or maliciously)?
- Data protection: How to deal with requests for data correction/deletion?
- Format changes: How should future changes in format or structure be handled?
- Processing: What are the data processing steps?
- Trust: Might there be persistent mistrust among actors? Can this mistrust be managed by hierarchies or contracts?
- Output: What should be the results of this data?
- Effects: What will (automatically) happen based on this data?
- End of life: Is there an end-of-life for the data? What should happen then?
- Universality: Do all records, fields, and relationships share the same properties listed above?
The answers to this list (or to the questionnaire) can then be used as input to the flow diagram to the right.
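The step from questionnaire answers to the decision diagram can be sketched as a small function. The questions and the returned advice below are a loose paraphrase of the kind of branches found in the NIST IR 8202 diagram, not its exact wording or order; the function name and strings are my own:

```python
def blockchain_advised(
    shared_store_needed: bool,     # Do multiple parties need a shared, consistent data store?
    multiple_writers: bool,        # Does more than one entity contribute data?
    no_trusted_third_party: bool,  # Is a trusted third party unavailable or unwanted?
    records_immutable: bool,       # Are records never updated or deleted once written?
) -> str:
    """Walk a simplified paraphrase of the NIST IR 8202 decision flow."""
    if not shared_store_needed:
        return "No shared store needed: use a conventional database."
    if not multiple_writers:
        return "Single writer: a conventional database suffices."
    if not no_trusted_third_party:
        return "A trusted third party exists: let it operate a database."
    if not records_immutable:
        return "Records change: blockchain's append-only model fits poorly."
    return "A blockchain may be worth evaluating."
```

Only when every branch falls through, shared store, multiple mutually distrusting writers, no trusted intermediary, append-only data, does the diagram end at "consider a blockchain"; any earlier exit points back to conventional technology.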
#DataProtection #ResearchDataManagement #Blockchain #DataLifecycle
Hitchhiker’s Guide to the Blockchain: An Overview
An overview of information related to blockchain technology. A German 🇩🇪 version with more context is available here. Netfuture: The future is networked
Mother of Vegan Child in Mallorca Permitted to Adapt Her Child’s Menu in School Canteen
Mother of Vegan Child in Mallorca Permitted to Adapt Her Child’s Menu in School Canteen | Vegan FTA
A mother in the Spanish autonomous community of the Balearic Islands who fought for her five-year-old son to be able to eat vegan food at school has managed to… Jordi Casamitjana (Vegan FTA)
‘A Victory of Common Sense’: France Will No Longer Ban ‘Meat’ Words for Plant-Based Products
'A Victory of Common Sense': France Will No Longer Ban 'Meat' Words for Plant-Based Products
In January 2025, France's Council of State put a stop to any bans on "meat" language on plant-based labels in France. Charlotte Pointing (VegNews)
New study links red meat to faster cognitive decline - If people replaced the protein in processed red meat with protein from nuts, tofu, or beans, they could reduce their dementia risk by 19%.
New study links red meat to faster cognitive decline
Eating processed red meat, like bacon and sausages, is linked to a 16% higher dementia risk. This alarming finding highlights the importance of dietary choices in protecting cognitive function as we age. Eef Hogervorst and Emma D'Donnell (PsyPost)
Social Web FOSDEM Wrap-up and Next Steps
It’s been more than a week since the last event at #SocialWebFOSDEM, and I’m just now getting to writing up some summary thoughts and follow-on notes. We had three big planned events at FOSDEM, so there is an overwhelming amount to think about and plenty of takeaways to discuss. The biggest takeaway for me was: we needed this. The Social Web developer community came together in person for the first time in a while, and it was remarkably fruitful.
For those who weren’t able to attend in person, or who want to go back and review or share great talks, there are a few options. First, the Social Web Devroom talks were all recorded with audio and video, some of which have begun appearing on the FOSDEM web site; check the specific talks for details. Most of the schedule entries also have slides attached in PDF format.
The Social Web BOF was not recorded on video, but most of the talks have published slides. Benjamin Bellamy of Castopod also thoughtfully recorded audio for the talks, so I will try to slice it up into listenable chunks, one for each talk. The BOF talks were some of my favourites, so I’m looking forward to getting these released.
Finally, the Social Web After Hours event was not recorded at all, but some of the presenters have released their slides. Julian Lam’s talk, “The Fediverse is Quiet“, was re-recorded for you to view. This event was a great way to end the weekend, with a much more casual and intimate vibe than the lecture rooms at ULB. A fun time.
Lastly, I’ve organized a FOSDEM debrief for speakers at all of the events for this week. If you were at FOSDEM and you’d like to attend, comment here or DM me at @evan .
Social Web After Hours at FOSDEM 2025
The Social Web Foundation and Hackerspace Brussels (HSBXL) are co-hosting an off-site event at FOSDEM 2025 in Brussels, Belgium on Sunday, February 2, 2025 from 19:00 to 21:00 local time. Social Web After Hours will feature four Fediverse-focused presentations from leaders of the ActivityPub community:
- Darius Kazemi will discuss the Fediverse Observatory
- Christine Lemmer-Webber and Jessica Tallon will discuss their work at the Spritely Institute
- Julian Lam will discuss NodeBB and using ActivityPub for threaded discussions
- Matthias Pfefferle will present the ActivityPub plugin for WordPress
The event is open to the public, but space is limited. HSBXL is at Rue Osseghem 53, 1080 Molenbeek, Brussels, Belgium. Light food and drink available for purchase at the event; proceeds benefit HSBXL.
2025 – 041: Monthly Poem
February: Japanese-style short, characteristic. #TankaToGo
I like these short forms of Far Eastern poetry. With haiku, I like to stick to the content guidelines that exist for it […]
#Bach #Eis #Februar #Frühling #Gedicht #Poesie #Tanka #Wasser
deremil.blogda.ch/2025/02/10/0…
2025 – 041: Monthly Poem
February: Japanese-style short, characteristic. #TankaToGo I like these short forms of Far Eastern poetry. With haiku, I like to stick to the content guidelines that exist for it […] GeDACHt | Geschrieben | Erlebt | Gesehen