it is the people who turned up who made it - it was good to see how we can keep going in the face of everything, roll on February indeed !!! JarrahTree 00:49, 16 January 2016 (UTC)
Thanks for the heads up! :) See you then, then. Oh, and it's odd that you're not allowed to redirect a user talk page, isn't it? Ah well. I'll put it back how it was. SamWilson 06:18, 22 January 2016 (UTC)
nah, it is just me, I don't know what the official version is... looks odd, but hey, you Freo people always do things different, hey! :) JarrahTree 12:41, 22 January 2016 (UTC)
Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Tom Bateman, you added a link pointing to the disambiguation page Donnybrook. Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ • Join us at the DPL WikiProject.
Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Haralampi Perev, you added a link pointing to the disambiguation page Macedonia. Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ • Join us at the DPL WikiProject.
Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Jade Dolman, you added a link pointing to the disambiguation page Irish. Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ • Join us at the DPL WikiProject.
Wikimania 2016 is almost here! Mjohnson (WMF) and I are running two workshops for IdeaLab during the conference, and you are invited to join us for either (or both!).
If you have a proposal or idea you are thinking about, and would like a space to work on it on your own or with others, please consider joining us for either the Thursday or Saturday sessions. We'll discuss a little about IdeaLab and how it works, and the rest of the time is space for idea building. You can also use this session to ask questions about Wikimedia Foundation grants that are available if your proposal or idea may need funding. Thanks, and see you at the conference! I JethroBT (WMF) (talk) 20:45, 19 June 2016 (UTC)
I'm very glad that there are more Wikisources working with ProofreadPage! Good to meet you. Hope your travels are going/gone well. :-) —SamWilson 05:21, 28 June 2016 (UTC)
New portal on Meta outlining various Perth/WA activities related to Wikimedia projects. Any ideas for it, or help in adding/editing/updating it, would be appreciated. - Evad37 [talk] 09:06, 22 August 2016 (UTC)
I saw your question about the NR Class and its fuel efficiency. Your best bet is to post at railpage.com.au and see what answer you get. Lots of knowledgeable people there. Jamesbushell.au (talk) 11:29, 8 November 2016 (UTC)
Hello, Samwilson. Voting in the 2016 Arbitration Committee elections is open from Monday, 00:00, 21 November through Sunday, 23:59, 4 December to all unblocked users who have registered an account before Wednesday, 00:00, 28 October 2016 and have made at least 150 mainspace edits before Sunday, 00:00, 1 November 2016.
The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.
Note: All columns in this table are sortable, allowing you to rearrange the table so the articles most interesting to you are shown at the top. All images have mouse-over popups with more information. For more information about the columns and categories, please consult the documentation and please get in touch on SuggestBot's talk page with any questions you might have.
SuggestBot picks articles in a number of ways based on other articles you've edited, including straight text similarity, following wikilinks, and matching your editing patterns against those of other Wikipedians. It tries to recommend only articles that other Wikipedians have marked as needing work. Your contributions make Wikipedia better — thanks for helping.
This is the very first newsletter sent by mass mail to members of Wikipedia:WikiProject Genealogy, to everyone who voted in support of establishing a potential Wikimedia genealogy project on Meta, and to anyone who over the years has shown an interest in genealogy on talk pages and the like.
(To discontinue receiving Project Genealogy newsletters, see below)
The future of the Genealogy project on the English Wikipedia, and the potential creation of a new Wikimedia genealogy project, is something you can have input into.
This page contains the following errors:
error on line 26 at column 37: Encoding error
Below is a rendering of the page up to the first error.
The Signpost https://en.wikipedia.org/w/index.php?title=Category%3AWikipedia+Signpost+RSS+feed Wikipedia:Wikipedia Signpost/2017-02-06/Traffic report https://en.wikipedia.org/w/index.php?curid=53078760
Okay, it should be fixed now. It was breaking a multibyte character in the middle, resulting in an invalid character. I've switched to using mb_substr for producing the description (the first 400 characters of the post). I think a better system for descriptions could be devised! I should probably get around to supporting https://schema.org/BlogPosting, but that's more work. :-) Let me know how it goes now. Oh, and great idea there of making an explicit feed category! SamWilson 03:10, 24 February 2017 (UTC)
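The character-splitting bug and the fix can be illustrated with a short sketch. This is a Python illustration of the same idea; the tool itself uses PHP's mb_substr, and the 400-character limit is the one mentioned above:

```python
text = "é" * 500  # each "é" is 2 bytes in UTF-8

# Naive byte-level truncation can cut through a multibyte character,
# leaving an invalid UTF-8 sequence at the end of the description:
raw = text.encode("utf-8")[:401]  # 401 bytes: splits the 201st "é"
try:
    raw.decode("utf-8")
    valid = True
except UnicodeDecodeError:
    valid = False
print(valid)  # False

# Character-aware truncation (what mb_substr does) is always safe:
description = text[:400]
print(len(description.encode("utf-8")))  # 800
```

The point is that the truncation boundary must be counted in characters, not bytes, or the feed can end up containing invalid UTF-8.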
Hey Sam, it would be awesome if we could get better descriptions for the RSS feed – that would allow the Signpost feed to be republished on sites such as m:Planet Wikimedia. If it's not too much work, can you look into that BlogPosting schema you mentioned? Or is there a way to get the tool to grab content that is hidden from humans, like within a {{void}} template, or in a <div style="display:none;"> element? - Evad37 [talk] 02:04, 26 February 2017 (UTC)
@Evad37: Okay, so you should now be able to define a description by adding a <span itemprop="description">...</span> (it can be a div or whatever else as well). SamWilson 00:47, 27 February 2017 (UTC)
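Putting the two suggestions together, a Signpost page could carry a description that is hidden from readers but picked up by the feed tool. A hypothetical example (the text content is made up):

```html
<div style="display:none;">
  <span itemprop="description">This week's Signpost: the traffic report, and more.</span>
</div>
```

The itemprop attribute is the hook the tool looks for; the display:none wrapper keeps it out of the rendered article.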
This is the second newsletter sent by mass mail to members of Wikipedia:WikiProject Genealogy, to everyone who voted in support of establishing a potential Wikimedia genealogy project on Meta, and to anyone who over the years has shown an interest in genealogy on talk pages and the like.
(To discontinue receiving Project Genealogy newsletters, please see below)
Progress report:
In order to improve communication between genealogy-interested Wikipedians, as well as to chat about the potential new wiki, a new IRC channel has been set up, and you are welcome to visit and try it out at: #wikimedia-genealogy (connect)
(In case you are not familiar with IRC, or would prefer some info and an intro, please see Wikipedia's IRC tutorial)
I am kicking myself I didn't ask him to go out just a little beyond the north mole - will have to go to Rottnest just for that sometime soon, I think JarrahTree 10:29, 12 March 2017 (UTC)
wow - one of your fellow tech geeks has made my life on wp better again - I can see all of the edit summary text in the box for the first time in months !!! yippee JarrahTree 10:41, 24 March 2017 (UTC)
This is the third newsletter sent by mass mail to members of Wikipedia:WikiProject Genealogy, to everyone who voted in support of establishing a potential Wikimedia genealogy project on Meta, and to anyone who over the years has shown an interest in genealogy on talk pages and the like.
(To discontinue receiving Project Genealogy newsletters, please see below)
Request:
In order to improve communication between genealogy-interested Wikipedians, as well as to take new, important steps towards the creation of a new project site, we need to make communication between users easier and more effective.
The possibility of creating a genealogy-related Wikimedia email list is being discussed at Mail list on Meta. In order to request the creation of such a list, we need your voice and your vote.
In order to create a new list, we need to file a request in Phabricator, with links to an explanation of its purpose and to the community consensus. We therefore need your vote now, so that we can request the creation of the mailing list.
Read more about this email list at Meta: Wikimedia genealogy project mail list, where you can support the creation of the mailing list with your vote, in case you haven't done so already.
This is the fourth newsletter sent by mass mail to members of Wikipedia:WikiProject Genealogy, to everyone who voted in support of establishing a potential Wikimedia genealogy project on Meta, and to anyone who over the years has shown an interest in genealogy on talk pages and the like.
(To discontinue receiving Project Genealogy newsletters, please see below)
Mailing list created:
The project email list is now created and ready to use!
@Gderrin: no worries! :-) I'm glad it's useful. I hope I got it all right. And I tried to track down the source for the misspelling of Alwyn/Alwin but couldn't find it (I also added a redirect). SamWilson 10:07, 26 June 2017 (UTC)
Yep - fading memory. Sometimes wrote Alwyn, others Alwin (mostly the latter tho). Will gradually go back, correct and wikilink the ones I missed. Making a couple of small changes to the Clements page. Gderrin (talk) 10:18, 26 June 2017 (UTC)
Hi Samwilson! We're so happy you wanted to play to learn, as a friendly and fun way to get into our community and mission. I think these links might be helpful to you as you get started.
Hello! I understand that you are a maintainer of XTools. I'm looking for the documentation for the XTools timecard, which is part of the general user statistics. I haven't been able to find it on Toolforge, Phabricator, or elsewhere! I'm hoping to look at the documentation, potentially contribute, and also look into what data I can get from it. Any info you can provide on where to find the timecard documentation would be appreciated! Hexatekin (talk) 01:07, 7 September 2017 (UTC)
Help design a new feature to stop harassing emails
Hi there,
The Anti-Harassment Tools team plans to start development of a new feature to allow users to restrict emails from new accounts. This feature will allow an individual user to stop harassing emails coming through the Special:EmailUser system from abusive sockpuppeting accounts.
Please let us know if you wish to opt-out of all massmessage mailings from the Anti-harassment tools team.
Facto Post – Issue 5 – 17 October 2017
Editorial: Annotations
Annotation is nothing new. The glossators of medieval Europe annotated between the lines, or in the margins of legal manuscripts of texts going back to Roman times, and created a new discipline. In the form of web annotation, the idea is back, with texts being marked up inline, or with a stand-off system. Where could it lead?
ContentMine operates in the field of text and data mining (TDM), where annotation, simply put, can add value to mined text. It now sees annotation as a possible advance in semi-automation, the use of human judgement assisted by bot editing, which now plays a large part in Wikidata tools. While a human judgement call of yes/no, on the addition of a statement to Wikidata, is usually taken as decisive, it need not be. The human assent may be passed into an annotation system, and stored: this idea is standard on Wikisource, for example, where text is considered "validated" only when two different accounts have stated that the proof-reading is correct. A typical application would be to require more than one person to agree that what is said in the reference translates correctly into the formal Wikidata statement. Rejections are also potentially useful to record, for machine learning.
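The Wikisource-style rule described above, where an addition counts as validated only once two different accounts have assented, could be modelled in a few lines. A hypothetical Python sketch (names and the threshold of two are illustrative, not any project's actual implementation):

```python
from collections import defaultdict

# Store each account's yes/no judgement on a proposed statement.
# Rejections are recorded too, as the text suggests, for later analysis.
judgements = defaultdict(dict)  # statement id -> {account: bool}

def record(statement_id, account, assent):
    """Record one account's judgement; re-voting just overwrites."""
    judgements[statement_id][account] = assent

def validated(statement_id, required=2):
    """A statement is validated once `required` distinct accounts assent."""
    return sum(judgements[statement_id].values()) >= required

record("stmt-1", "alice", True)
record("stmt-1", "alice", True)   # same account again: no effect
print(validated("stmt-1"))        # False: only one distinct assent
record("stmt-1", "bob", True)
print(validated("stmt-1"))        # True: two distinct accounts agree
```

The key design point is that judgements are keyed by account, so repeated assent from one user cannot substitute for independent review.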
As a contribution to data integrity on Wikidata, annotation has much to offer. Some "hard cases" on importing data are much more difficult than average. There are for example biographical puzzles: whether person A in one context is really identical with person B, of the same name, in another context. In science, clinical medicine requires special attention to sourcing (WP:MEDRS), and is challenging in terms of connecting findings with the methodology employed. Currently decisions in areas such as these, on Wikipedia and Wikidata, are often made ad hoc. In particular there may be no audit trail for those who want to check what is decided.
Annotations are subject to a World Wide Web Consortium standard, and behind the terminology constitute a simple JSON data structure. What WikiFactMine proposes to do with them is to implement the MEDRS guideline, as a formal algorithm, on bibliographical and methodological data. The structure will integrate with those inputs the human decisions on the interpretation of scientific papers that underlie claims on Wikidata. What is added to Wikidata will therefore be supported by a transparent and rigorous system that documents decisions.
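For reference, an annotation under the W3C Web Annotation data model is indeed a small JSON-LD object with a body and a target. A minimal example (the body text and target URL here are hypothetical):

```json
{
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "type": "Annotation",
  "body": {
    "type": "TextualBody",
    "value": "Reviewer confirms the cited source supports this statement."
  },
  "target": "https://www.wikidata.org/wiki/Q42#P569"
}
```

A system like the one proposed would store human yes/no decisions as bodies of annotations targeting the relevant Wikidata statements.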
An example of the possible future scope of annotation, for medical content, is in the first link below. That sort of detailed abstract of a publication can be a target for TDM, adds great value, and could be presented in machine-readable form. You are invited to discuss the detailed proposal on Wikidata, via its talk page.
Under the heading rerum causas cognoscere, the first ever Wikidata conference got under way in the Tagesspiegel building with two keynotes. One was on YAGO, a knowledge base conceived ten years ago on the premise of automatic compilation from Wikipedia. The other was from manager Lydia Pintscher, on the "state of the data". Interesting rumours flourished: the mix'n'match tool and its 600+ datasets, mostly in digital humanities, are to be taken off the hands of its author Magnus Manske by the WMF; a Wikibase incubator site is on its way. Announcements came in talks: structured data on Wikimedia Commons is scheduled to make substantive progress by 2019. The lexeme development on Wikidata is now not expected to make the Wiktionary sites redundant, but may facilitate automated compilation of dictionaries.
And so it went, with five strands of talks and workshops, through to 11 pm on Saturday. Wikidata applies to GLAM work via metadata. It may be used in education, raises issues such as author disambiguation, and lends itself to different types of graphical display and reuse. Many millions of SPARQL queries are run on the site every day. Over the summer a large open science bibliography has come into existence there.
Hello, Samwilson. Voting in the 2017 Arbitration Committee elections is now open until 23.59 on Sunday, 10 December. All users who registered an account before Saturday, 28 October 2017, made at least 150 mainspace edits before Wednesday, 1 November 2017 and are not currently blocked are eligible to vote. Users with alternate accounts may only vote once.
The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.
Hi. Thank you for your recent edits. An automated process has detected that when you recently edited Jack plane, you added a link pointing to the disambiguation page Jack of all trades (check to confirm | fix with Dab solver). Such links are usually incorrect, since a disambiguation page is merely a list of unrelated topics with similar titles. (Read the FAQ • Join us at the DPL WikiProject.)
At the beginning of December, Wikidata items on individual scientific articles passed the 10 million mark. This figure contrasts with the state of play in early summer, when there were around half a million. In the big picture, Wikidata is now documenting the scientific literature at a rate that is about eight times as fast as papers are published. As 2017 ends, progress is quite evident.
Behind this achievement are a technical advance (fatameh), and bots that do the lifting. Much more than dry migration of metadata is potentially involved, however. If paper A cites paper B, both papers having an item, a link can be created on Wikidata, and the information presented to both human readers, and machines. This cross-linking is one of the most significant aspects of the scientific literature, and now a long-sought open version is rapidly being built up.
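The cross-linking idea reduces to a simple transformation: given items for papers and a raw citation list, each A-cites-B pair becomes a triple on the citing item. A sketch (the QIDs and paper names are hypothetical; P2860 is Wikidata's "cites work" property):

```python
# Hypothetical items for papers; "paper_C" has no item yet.
items = {"paper_A": "Q111", "paper_B": "Q222"}
citations = [("paper_A", "paper_B"), ("paper_A", "paper_C")]

# Only citations where both papers have items become statements;
# the rest wait until the missing item is created.
statements = [
    (items[src], "P2860", items[dst])
    for src, dst in citations
    if src in items and dst in items
]
print(statements)  # [('Q111', 'P2860', 'Q222')]
```

Because the link lives on the item rather than in any one paper's metadata, it is equally available to human readers and to machines querying the graph.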
The effort for the lifting of copyright restrictions on citation data of this kind has had real momentum behind it during 2017. WikiCite and the I4OC have been pushing hard, with the result that on CrossRef over 50% of the citation data is open. Now the holdout publishers are being lobbied to release rights on citations.
But all that is just the beginning. Topics of papers are identified, authors disambiguated, with significant progress on the use of the four million ORCID IDs for researchers, and proposals formulated to identify methodology in a machine-readable way. P4510 on Wikidata has been introduced so that methodology can sit comfortably on items about papers.
This is the fifth newsletter sent by mass mail to members of Wikipedia:WikiProject Genealogy, to everyone who voted in support of establishing a potential Wikimedia genealogy project on Meta, and to anyone who over the years has shown an interest in genealogy on talk pages and the like.
(To discontinue receiving Project Genealogy newsletters, please see below)
A demo wiki is up and running!
Dear members of WikiProject Genealogy, this will be the last newsletter for 2017, but maybe the most important one!
You can already try out the demo for a genealogy wiki at https://tools.wmflabs.org/genealogy/wiki/Main_Page and try out its functions. You will find parts of the 18th Pharaoh dynasty and other records submitted by the first 7 users, and it would be great if you would add some records.
And with that great news, we want to wish you a creative New Year 2018!
From the days of hard-copy liner notes on music albums, metadata have stood outside a piece or file, while adding to understanding of where it comes from, and some of what needs to be appreciated about its content. In the GLAM sector, the accumulation of accurate metadata for objects is key to the mission of an institution, and its presentation in cataloguing.
Today Wikipedia turns 17, with worlds still to conquer. Zooming out from the individual GLAM object to the ontology in which it is set, one such world becomes apparent: GLAMs use custom ontologies, and those introduce massive incompatibilities. From a recent article by sadads, we quote the observation that "vocabularies needed for many collections, topics and intellectual spaces defy the expectations of the larger professional communities." A job for the encyclopedist, certainly. But the data-minded Wikimedian has the advantages of Wikidata, starting with its multilingual data, and facility with aliases. The controlled vocabulary — sometimes referred to as a "thesaurus" as a term of art — simplifies search: if a "spade" must be called that, rather than "shovel", it is easier to find all spade references. That control comes at a cost.
Case studies in that article show what can lie ahead. The schema crosswalk, in jargon, is a potential answer to the GLAM Babel of proliferating and expanding vocabularies. Even if you have no interest in Wikidata as such, simply vocabularies V and W, if both V and W are matched to Wikidata, then a "crosswalk" arises from term v in V to w in W, whenever v and w both match to the same item d in Wikidata.
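In code terms, the crosswalk falls out of composing the two matchings. A Python sketch with hypothetical vocabularies and item IDs:

```python
# Two vocabularies, each matched to Wikidata items (IDs hypothetical).
v_to_item = {"spade": "Q1234", "wheelbarrow": "Q5678", "rake": "Q9999"}
w_to_item = {"shovel": "Q1234", "barrow": "Q5678"}

# Invert W's matching, then compose V -> item -> W.
item_to_w = {item: w for w, item in w_to_item.items()}
crosswalk = {
    v: item_to_w[item]
    for v, item in v_to_item.items()
    if item in item_to_w          # "rake" has no counterpart in W
}
print(crosswalk)  # {'spade': 'shovel', 'wheelbarrow': 'barrow'}
```

Neither vocabulary needs to know about the other: the shared Wikidata items act as the hub through which every pairwise crosswalk can be derived.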
For metadata mobility, match to Wikidata. It's apparently that simple: infrastructure requirements have turned out, so far, to be challenges that can be met.
One way of looking at Wikidata relates it to the semantic web concept, around for about as long as Wikipedia, and realised in dozens of distributed Web institutions. It sees Wikidata as supplying central, encyclopedic coverage of linked structured data, and looks ahead to greater support for "federated queries" that draw together information from all parts of the emerging network of websites.
Another perspective might be likened to a photographic negative of that one: Wikidata as an already-functioning Web hub. Over half of its properties are identifiers on other websites. These are Wikidata's "external links", to use Wikipedia terminology: one type for the DOI of a publication, another for the VIAF page of an author, with thousands more such. Wikidata links out to sites that are not nominally part of the semantic web, effectively drawing them into a larger system. The crosswalk possibilities of the systematic construction of these links were covered in Issue 8.
Wikipedia:External links speaks of them as kept "minimal, meritable, and directly relevant to the article." Here Wikidata finds more of a function. On viaf.org one can type a VIAF author identifier into the search box, and find the author page. The Wikidata Resolver tool, these days including Open Street Map, Scholia etc., allows this kind of lookup. The hub tool by maxlath takes a major step further, allowing both lookup and crosswalk to be encoded in a single URL.
Around the time in February when Wikidata clicked past item Q50000000, another milestone was reached: the mix'n'match tool uploaded its 1000th dataset. Concisely defined by its author, Magnus Manske, it works "to match entries in external catalogs to Wikidata". The total number of entries is now well into eight figures, and more are constantly being added: a couple of new catalogs each day is normal.
Since the end of 2013, mix'n'match has gradually come to play a significant part in adding statements to Wikidata. Particularly in areas with the flavour of digital humanities, but datasets can of course be about practically anything. There is a catalog on skyscrapers, and two on spiders.
These days mix'n'match can be used in numerous modes, from the relaxed gamified click through a catalog looking for matches, with prompts, to the fantastically useful and often demanding search across all catalogs. I'll type that again: you can search 1000+ datasets from the simple box at the top right. The drop-down menu at the top left offers "creation candidates", Magnus's personal favourite. See m:Mix'n'match/Manual for more.
For the Wikidatan, a key point is that these matches, however carried out, add statements to Wikidata if, and naturally only if, there is a Wikidata property associated with the catalog. For everyone, however, the hands-on experience of deciding what is a good match is an education in a scholarly area, with biographical catalogs being particularly fraught. Underpinning recent rapid progress is an open infrastructure for scraping and uploading.