Wiktionary:Grease pit/2008/March

Template:la-verb-form help
OK, so I started trying to write a new template and realized that it's more complicated than I originally thought. The template in question is. The template should require a single argument, namely the verb form with macrons indicated. This should be displayed in bold as the inflection line for non-lemma entries. Here's the catch: When no argument is included, I would like the template to use instead, and to put the article into Category:Latin words needing attention, or better still, Category:Latin verb forms needing attention. --EncycloPetey 20:52, 1 March 2008 (UTC)

Was that what you wanted? Conrad.Irwin 20:57, 1 March 2008 (UTC)
 * Yes, thanks. --EncycloPetey 21:14, 1 March 2008 (UTC)

SAMPA and enPR templates
Currently the template is coded such that giving it multiple parameters results in nicely formatted multiple pronunciations. The and  templates on the other hand ignore all but the first parameter.

Could someone who understands the syntax (I've had a look and it's beyond my understanding) adjust the latter two to work the same way as. See vice versa for where this will be particularly useful. Thanks, Thryduulf 22:53, 1 March 2008 (UTC)


 * I think that is now done, and I have done a bit of other cleanup on the enPR template. Let me know if I have borked everything and I will fix it. Conrad.Irwin 23:17, 1 March 2008 (UTC)


 * appears to work now, but doesn't quite -  produces "enPR: 123" rather than "enPR: 1, 2, 3". Thryduulf 23:23, 1 March 2008 (UTC)


 * Sorry, noob error - should be fixed now. Conrad.Irwin 23:54, 1 March 2008 (UTC)


 * Yes, that's working now. Thanks. Thryduulf 00:41, 2 March 2008 (UTC)

Prototype for Template:nav changes
I've prototyped the change discussed at WT:BP for at Template:topic cat. It has been put into place on the following categories:


 * Category:ja:Judaism (new category)
 * Category:nci:Games (new category)
 * Category:nci:Recreation (existing category, before, after)
 * Category:lua:*Topics (existing category, before, after)

The prototype allows for defaulting of both descriptions and parent categories, but it also supports all of the parameters currently supported by. If parents are provided explicitly for a category whose parents are configured already, the parents passed as parameters will be merged with the defaults.

Since all of the existing parameters are supported, there is no need for "normal" editors to deal with setting up the category description templates or parent templates, but we could probably use a bot to find these places and consolidate or report them to avoid duplicating effort for future editors.

Please have a look at the implementation. Mike Dillon 23:35, 1 March 2008 (UTC)
 * Looking good - this is a major improvement and will save us bags of time. I'm sure that a bot could do a lot of the migration of any categories not yet using.
 * It may also be possible to supersede some etymology category nav templates that I created too. That would simplify things a great deal.--Williamsayers79 08:21, 6 March 2008 (UTC)


 * I'll take a look at them when I get some time. I've also listed some open issues and possible transition plan at User:Mike Dillon/Topics. Mike Dillon 20:01, 6 March 2008 (UTC)

categorization
I have changed it to handle plurals of POS ending in -x (suffix, etc). But also eliminated the #ifexist calls to check for the cats, it just assumes (language) (POS) exists. (If #ifexist had not been broken, it would be okay, but it now adds to the links table. Quite uselessly, since the job queue does not update pages when a cat is created anyway!)
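For reference, the pluralization rule being described amounts to something like the following (a Python analogue of the template logic, purely illustrative, not the template itself):

```python
def plural_pos(pos):
    """Pluralize a part-of-speech name: forms ending in -x (suffix,
    prefix, infix) take -es, everything else takes a plain -s."""
    return pos + ("es" if pos.endswith("x") else "s")
```

So "(language) suffix" categories become "(language) suffixes" rather than "suffixs".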

This references cats we may or may not want, e.g. Category:Translingual abbreviations?

Also suppresses comma before "or". Robert Ullmann 15:23, 2 March 2008 (UTC)


 * There has been a template TALK 17:07, 2 March 2008 (UTC)


 * This isn't a serial comma (comma before final and/or in a list of 3+ items). The case here is just two items. (Or 3 in "A or B or C" format). It was not wanted. See ministerium. Robert Ullmann 17:16, 2 March 2008 (UTC)

Normal wiki markup does not work in Wiktionary?
I am trying to put a table with all my accounts in it on every one of my account pages (see, for example, here), but when I try to copy the markup to Wiktionary, it does not work in places.

Please help! It Is Me Here 16:54, 2 March 2008 (UTC)

Edit: I have included two attempts to upload the table in my user page - if anyone knows what the problem is, please tell me! It Is Me Here 16:57, 2 March 2008 (UTC)


 * Apparently since June 2006 userboxes have not been supported at Wiktionary. I don't know whether a change would be supported now. DCDuring TALK 17:04, 2 March 2008 (UTC)


 * Perhaps it is Wiktionary's "no-userbox" policy that you've missed? --Connel MacKenzie 17:05, 2 March 2008 (UTC)


 * See the Draft Proposal Usernames and user pages. There may have been some vote which you could find in the Vote archives. DCDuring TALK 17:11, 2 March 2008 (UTC)


 * I think the relevant text here is WT:NPOV, added pursuant to Votes/pl-2007-08/Babel userboxes (although the actual community decision on userboxes dates back somewhat further than that). I think Connel knows this.  :-) I assume you meant to reply to the OP?  -- Visviva 03:13, 3 March 2008 (UTC)


 * It looks fine to me, except for the redlinks to Userboxbottom and Userboxtop; these templates do not exist on English Wiktionary, and probably never will. -- Visviva 17:06, 2 March 2008 (UTC)


 * Just lose "Userboxtop" and "Userboxbottom". Why you have two nested tables I don't know. Wikimarkup works the same here. (We don't have user boxes because they are a horrendous waste of time, and useless. Look at WP to see just how ridiculous they can be.) Robert Ullmann 17:08, 2 March 2008 (UTC)


 * But wouldn't it make more sense to have the same features in all Wiki projects, as they are meant to be part of the same company etc.? It Is Me Here 07:55, 10 March 2008 (UTC)


 * Not really. Each project makes its own policies based on the unique characteristics of the project and community.  Likewise, templates are created separately at each project, and are only copied between projects when it is advantageous to do so.
 * As you may be aware, the inclusion of userboxes on the English Wikipedia was the focus of serial wheel wars (now known as "the userbox wars") back in 2006. If the WP community had not already tipped irretrievably toward Myspaceism and vapid self-expression by that point, it is quite possible that 'pedians would now have the same policy as we do.  -- Visviva 08:18, 10 March 2008 (UTC)

A bot to update "see also" templates?
Robert Ullmann has demonstrated that it is possible to pluck all the variations for a given string of letters (i.e. with capitalization, punctuation, and diacritics). Could we get a bot to automatically check and update the templates atop all of the corresponding pages, so that, for example, every variation of lan is included in that template for every page that is a variation of lan? bd2412 T 18:39, 2 March 2008 (UTC)
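The variation-matching such a bot would need can be sketched roughly in Python, assuming Unicode decomposition for diacritic stripping (a hypothetical helper, not Robert Ullmann's actual code):

```python
import unicodedata

def variation_key(title):
    """Reduce a page title to a bare lowercase key by stripping
    diacritics, case, and punctuation such as hyphens."""
    # Decompose accented characters, then drop the combining marks.
    decomposed = unicodedata.normalize("NFD", title)
    stripped = "".join(c for c in decomposed
                       if unicodedata.category(c) != "Mn")
    # Drop anything that is not a letter or digit.
    cleaned = "".join(c for c in stripped if c.isalnum())
    return cleaned.lower()

def group_variations(titles):
    """Group page titles that are variations of the same letters."""
    groups = {}
    for t in titles:
        groups.setdefault(variation_key(t), []).append(t)
    return groups

groups = group_variations(["lan", "Lan", "LAN", "lán", "lan-", "láng"])
```

A bot could then, for every page in a group, make sure the "see also" line at the top lists every other member of the same group.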


 * The DidYouMean extension is a much more efficient way to accomplish that, but I don't know how to yell at the developers with the right inflection, to get them to actually do it. --Connel MacKenzie 20:15, 2 March 2008 (UTC)


 * But isn't that primarily for errant searches for a word not in the dictionary? If I look up hen, I should come to hen - with a line at the top saying . And if someone adds another variation, say HEN or hen-, this see template should go on those pages, while those newly added terms should be added to all the other pages. bd2412 T 22:10, 2 March 2008 (UTC)


 * Fear not: that's what the DidYouMean extension does. (IMHO it's not a great name, especially since we use a similar name for the JavaScript thing that sends you to yes when you visit http://en.wiktionary.org/wiki/YES, but the name isn't what matters. :-) —Ruakh TALK 23:03, 2 March 2008 (UTC)


 * Then why do we have at all? bd2412 T 07:04, 3 March 2008 (UTC)


 * Originally, I'm guessing was what spawned the idea of the DidYouMean extension. As for right now — the DidYouMean extension isn't live on Wikimedia sites (we need a developer to install it here). And even once it is here, we might still need  for special cases. —Ruakh TALK

How about, instead of updating all the cross-linked entries at once, to have on each a template that will transclude out of some special place (pseudo-namespace, special subpage of an entry with the least of diacritics, or whatever) the list of other entries to be displayed? That way the bot-updated change wouldn't show on the watchlist, and the amount of pages to be maintained would be reduced significantly. And it would be trivial to upgrade it to "See Appendix:Variations of xyz" once the see-list grows too large. --Ivan Štambuk 12:27, 3 March 2008 (UTC)
 * That would work as well - can we make a template that displays all variations except the name of the page it's on? I suppose it wouldn't be a deal-killer if it displayed that one as well, but it would be neater without. bd2412 T 05:19, 4 March 2008 (UTC)

[BOT] Populating Citations:*
I had a thought recently which I couldn't figure out how to follow through with, so I thought I would put it here and see if we could, collectively, solve it. The idea is this: originally Citations were to be gathered to offer material from which to generate definitions, like the little slips of paper from days of old. We have been populating these pages slowly and laboriously from books.google.com & Project Gutenberg and other sources, or typing out the quotations from hard copy which we possess. This is a silly thing to do. This particular task is very much worthy of a bot of some kind: a bot shouldn't have any trouble running through a corpus (Project Gutenberg downloaded, for example), finding sentences which contain a word, then uploading the citation, filling out the template from the meta information on the book. We could, in this way, populate much of the Citations namespace, which would allow us to more easily resolve WT:RFV issues as well as give us some lists of English words which we yet need. I am not sure exactly how this would be implemented, but I would assume we would look for something along the lines of 'oldest cite', 'newest cite' and then one cite per decade of the word's usage or somesuch? Anyhow, thoughts? - 23:31, 2 March 2008 (UTC)


 * I don't think this could work. When I'm looking for a good citation it is nothing for me to throw out 20-100 possibilities on b.g.c. or Wikisource before I find anything remotely suitable.  The thing is, quotations/citations really need to be illustrative, and illustrative of a particular sense.  Writing a bot that would reliably sort citations by sense would already challenge even the most skilled programmers; a bot that could accurately choose illustrative citations is AFAIK well beyond the current state of AI.
 * This is somewhat different from the RfV situation, but in any case it is quite rare for anything that turns up in PG or Wikisource to be RfV'd. Usually something that belongs on RfV requires careful combing of multiple sources, coupled with a human alertness for false positives and false negatives.
 * On the other hand, a bot that would add keyword-in-context (KWIC) lines from a particular corpus or corpora might be very useful, although I would argue (again) that such material should be in Concordance: namespace rather than Citations:, and should be in standard KWIC format, which is very different from the WT:QUOTE standard. Even here, though, big swaths of KWIC lines have been found to be of little value to non-linguists; as I recall, research on data-driven language learning indicates that concordancer output needs to be trimmed carefully by hand (down to 3-5 lines per concept illustrated) in order to be cognitively accessible to the general user. -- Visviva 03:06, 3 March 2008 (UTC)


 * But on the other hand, we might be able to have a bot pull quotations from selected texts at Wikisource. I wouldn't want a bot pulling quotes for the common short words, because we'd be flooded with useless quotes for the, and, of, his, etc., but I don't see why this couldn't work for longer words as a starting point.  The limitations I would want on the first pass at such a project are: (1) Do a single source (or the works of a single author) on a single run -- some authors' works on Wikisource are valuable for words, and others aren't (e.g. because they have been translated or edited heavily), (2) Have the bot trained to recognize sentences and paragraphs when clipping for context -- sometimes the necessary context that makes a good quote is in a single sentence, but sometimes a preceding or following sentence is necessary, (3) Limit quotations to words of more than (let's say) six characters to avoid the common short words, (4) Have the bot add quotes only for words that have an English entry, and if there is no English entry then dump the quote in another file so that they can be sorted -- some "words" in literature are half-words, articulations, and other items that would not merit inclusion, and there are likely to be proper nouns of questionable value for inclusion. --EncycloPetey 03:27, 3 March 2008 (UTC)


 * I agree with both comments here. A bot is going to be rather poor at finding illustrative quotes, especially for common words.  However, for less common words, it could provide a nice substrate for human editors to fine-tune.  Also, a bot could do an excellent job of charting a word's existence over time.  If you could get your bot to provide a cite for a word over time (perhaps one cite for every 25-year period the word has been in existence), that would be immensely useful.  Also, the bot should, as specifically as possible, cite where it got the reference, so that human editors can tweak the sentence, add preceding/following sentences for context, etc.  -Atelaes λάλει ἐμοί 03:36, 3 March 2008 (UTC)


 * While it would be completely useless for RFV (particularly as b.g.c. has so many scanning errors indexed) it might have a different purpose. I know Hippietrail worked on something a long time ago to pull citations from Usenet.  I could see a "RfquotesBot" that searches for boatloads of citations for uncontested terms tagged with .  Maybe a different approach would be to provide 10-20 links (just single square-bracket enclosed links) on each RFV section (that people could use as starting points) from Wikiquote or Project Gutenberg hits.  But that sounds fairly arduous, with little hope of positive return for the effort expended.  --Connel MacKenzie 04:49, 3 March 2008 (UTC)

We, as a community, have proven over and over again how good we are at finding reasons for not doing something, or reasons why a certain thing won't work, and in true Wiktionary style you all came through, thanks. I, however, had already thought up plenty of reasons why it couldn't work and why we shouldn't do it, so let's focus on the actual question: how can we get a bot to actually do this? I am not talking about getting quotations for the main namespace; a bot would have to be very intelligent to do that, to associate meaning with the context, a little too tricky for us at present. What I am aiming at is the population of the Citations namespace with citations derived from the corpus, exactly what that namespace was created for. Yes, we want to filter out certain 'mention' types of citations; let's figure out how to do that. Perhaps we should avoid non-fiction for a while until we do know how to do that. I doubt there would be much 'mention' in, say, Huckleberry Finn, but there would certainly be many thousands of citations derived from a seminal work of the English language, which would be great. We need a lot of citations: lead has 41 English definitions listed, and to meet CFI we need three distinct citations for each sense, which means 123 citations minimum. I don't want to find them by hand, but if there were, say, 200 citations listed on that page from 200 well known and edited works of the English language, UK and US, well, then, I don't have to look too hard to find at least one or two citations for the sense I am trying to verify. This can be helpful. I don't think we want every citations page to have 200 citations on it; fructoside probably only needs 5-10, seeing as it has only one real definition. I still don't want to go looking for them when I can have a bot do it for me and then just look at the Citations page. Let's forget about the whole inline quotations, with meanings associated, and think more about little slips of paper.
We could use some little slips of paper; we created a namespace to hold them, so let's get a bot finding some little slips and filling up the namespace so that we don't have to. The statistics page asserts that we have ~260,000 English definitions, each of which should have 3 citations, which means ~780,000 citations. That is a lot of copy/paste/reformat to do by hand. So I pose my initial question again: how can we make this work? - 21:05, 3 March 2008 (UTC)
 * Well, looking at it that way — that is, assuming (for the sake of argument) we're to do it, and merely discussing how to do it — I think, yes, omit non-fiction (because of mentions). Don't use bgc (because of scannos). Use Gutenberg (and the like, but I don't know what's out there) and find the first two times a word appears in each text not in a chapter heading (which can be defined somehow, based on how Gutenberg, or whatever database, displays chapter headings), and include the word and the preceding fifteen and following fifteen words as a citation, with the source being listed as the author and title, with a link to the database. If possible, also include the chapter number; depends on the database, I suppose. For Usenet quotations, make sure that the line you're quoting from doesn't start with a non-letter, nor with {some sequence of letters and numbers without whitespace which is followed by a right angle bracket}, as those signify quotations of previous posts (often). Link the Usenet citation to the Google page, and include author, newsgroup, date, and subject. (Author will be tough, since Google has a CAPTCHA in place to prevent bots from seeing authors' info.) That's just some thoughts.—msh210 ℠ 21:20, 3 March 2008 (UTC)
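The fixed-window extraction and the quoted-line heuristic described above might be sketched like this (hypothetical helpers, not working bot code):

```python
import re

def extract_window(text, word, radius=15):
    """Return the target word with up to `radius` words of context on
    each side, or None if the word does not occur in the text."""
    tokens = text.split()
    for i, tok in enumerate(tokens):
        # Compare ignoring surrounding punctuation.
        if tok.strip('.,;:!?"\'()').lower() == word.lower():
            return " ".join(tokens[max(0, i - radius):i + radius + 1])
    return None

def is_quoted_line(line):
    """Usenet heuristic: lines quoting a previous post typically start
    with a non-letter, or with an attribution prefix ending in '>'."""
    stripped = line.lstrip()
    if not stripped:
        return True
    if not stripped[0].isalpha():
        return True
    # A run of non-whitespace characters followed by '>'.
    return bool(re.match(r"\S*>", stripped))
```

The bot would skip any Usenet line for which is_quoted_line() is true, and store the extract_window() output along with author, title, and a link back to the source.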


 * Gutenberg, Wikisource, and Usenet would be great for starters. Still, I'm not sure what a bot-generated set of results could tell me that a quick survey of the various Googles wouldn't.  How many clicks would this actually save us?  ... In this regard, the more meta-data such as author, date, etc. the bot could extract, the more useful this would be.  -- Visviva 14:11, 9 March 2008 (UTC)


 * I think figuring out just Gutenberg would be the place to start, everything there is sorted, downloadable, in a normalized format, edited, and of high quality when it comes to providing citations. -  22:09, 11 March 2008 (UTC)


 * Agreed, Gutenberg is the way to go. I think it might be easier to do individual works, to start out with.  Pick one incredibly important piece of English literature (perhaps Romeo and Juliet or Grapes of Wrath).  Have your bot pick the first occurrence of every word and dump it (and the sentence it's found in) into the citations page of that word.  If your bot can read periods, that shouldn't be too complex.  Do that, and then let the community look at the results and see what we can tweak.  I think that it would be much better to use important works instead of Usenet, which should really be treated as more of a last resort.  -Atelaes λάλει ἐμοί 22:22, 11 March 2008 (UTC)


 * I am thinking it would ignore any capped words, it would only pull examples from sentences greater than 10 words, and would grab a sufficient chunk so as to assure context (three sentences: one before, the example, one after). We aren't running out of space, so getting three sentences won't be a big deal. Then slap it on the cites page in chronological order.  BAM! -  22:31, 11 March 2008 (UTC)
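Those rules can be sketched roughly as follows (an illustrative sketch using a naive regex sentence splitter, not the actual bot):

```python
import re

def first_cite(text, word):
    """Find the first sentence using `word` (lowercase words only,
    sentences longer than 10 words) and return it with one sentence
    of context on each side."""
    if word[0].isupper():
        return None  # ignore capped words (names and similar)
    # Naive sentence splitter: break after ., !, or ? plus whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    for i, s in enumerate(sentences):
        words = [w.strip('.,;:!?"\'()') for w in s.split()]
        if word in words and len(words) > 10:
            # One sentence before, the example, one sentence after.
            chunk = sentences[max(0, i - 1):i + 2]
            return " ".join(chunk)
    return None
```

A real run would need a better sentence splitter (abbreviations like "Mr." break this one), which is exactly the "if your bot can read periods" caveat above.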


 * Sounds good to me. I look forward to seeing the results.  -Atelaes λάλει ἐμοί 22:34, 11 March 2008 (UTC)


 * Me, too. —Ruakh TALK 23:32, 11 March 2008 (UTC)


 * Sounds mostly good to me, but I wouldn't trust Gutenberg (or Wikisource for that matter) when it comes to works by Shakespeare. The Bard's plays have been heavily edited and normalized at both sites, without indicating the date of the normalization, which is what is important for considering spelling.  I'd recommend an 18th or 19th century work to begin with, as they're usually not altered at either site.  Something like Wuthering Heights or Oliver Twist could make a good start.  If the Gutenberg version of Paradise Lost maintains the original spelling and capitalization, then it would make a good text as well, as would Gulliver's Travels.  The only other thing I wonder about is how the bot will know what chapter to put if it's drawing on a Gutenberg text.  Gutenberg works are generally a single file for an entire work, whereas Wikisource tends to break longer works into pages according to chapters. --EncycloPetey 01:32, 12 March 2008 (UTC)


 * Why do we need the chapter? Currently we put date, author, title and cite. -  01:53, 12 March 2008 (UTC)


 * Actually, we usually try to put original date, author, title, publisher, publication date, ISBN (for recent works), and page number; this maximizes the chances that someone trying to track down our cite (to confirm it, get more context, whatever) will be able to do so. When we can't get all that, we take what we can, but I really don't like the idea of giving no information about where in the work the quotation might be found — page, chapter, scene, something. —Ruakh TALK 02:07, 12 March 2008 (UTC)


 * The original date isn't always useful. Giving the original date for a play by Seneca, and quoting an English translation is meaningless.  Likewise, I have seen English "quotes" from the works of Voltaire and Jules Verne, neither of whom wrote in English.  In cases where the text has been significantly altered, the date of alteration is a must, precisely because the original date in such instances will not allow the user to track down the quote. --EncycloPetey 02:15, 12 March 2008 (UTC)


 * Oh, yes, agreed. By "original date" I mean "original date of translation" (where applicable); for example, quotes from the KJV should be dated '''1611''', not {{ante|ancient}}.  :-)   —Ruakh TALK 03:08, 12 March 2008 (UTC)


 * Dave, most people put in the chapter (or section, act, etc.) as well. This is part of any standard bibliographic citation.  --EncycloPetey 02:15, 12 March 2008 (UTC)

break

 * Presumably if we were pulling cites from Gutenberg we could just link back to the source; likewise from Wikisource. Unfortunately the best source for linking back to is Google, and they are the worst when it comes to scannos and pulling raw text, but they do kindly highlight the terms...oh well.  As for the dates, I think the publication date of the version being cited is what matters, and that is what I usually use when listing a cite.  I am not sure how often people are going to need to go back and look at the source of the citation, especially if a reasonable quantity of context is included, but I suppose the more information the better. I think a formal {cite} template is in order, especially if a bot will be adding them. -  02:27, 12 March 2008 (UTC)


 * Against with strict restrictions. If there's going to be any citations dump of any kind, we need to keep track of it. As a semi-frequent provider of quotations to back up entries, I doubt that a 'bot could do a reasonable job with providing decent quotes. The :Citations namespace is undeniably useful, but bots don't (and won't) have the intelligence to distinguish between good and bad quotes. With a new category (acceptable)/tag (acceptable)/namespace (acceptable, but a long shot) it is workable.


 * I think a lot of 'useless' citations will be filtered out with just a few simple rules, I have started developing rules already, and am open to suggestion, but this is what I have so far:
 * No words in italics or close quotes (quotations of 4 or fewer words, I thought). [Preventing 'mention' vs use]
 * No words in sentences of fewer than 10 words. [Preventing inadequate context for significant meaning]
 * No capitalized words. [Preventing names and similar]
 * With just these three rules we filter out, by my estimate, >99% of the text which wouldn't provide good cites. There may be more filters that other people can think of, or that I will think of along the way.  The other major filter is the choice of input sources.  By simply restricting ourselves to quality well-known works which originate in English, how many of the words used in them will we not want?  I think it is important not to overthink the Citations namespace: we don't need to restrict what is in there for any reason, it is meant to be a resource in verification of senses and in developing new senses, so the more content in it the better from where I am sitting.
 * There are definitely some things for which this bot will be useless; finding citations for multiple word entries and idioms and such will be difficult if not impossible, but we can start off simply and work into complexity. - 19:55, 14 March 2008 (UTC)
 * That isn't to say these are the only rules, it won't be adding every use of every word from every book, I was thinking each word can be cited no more than once per work, and I would be looking to get only one example per decade. - 20:29, 14 March 2008 (UTC)
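As a rough illustration of the filters proposed above (not the bot's actual code, and assuming Project-Gutenberg-style plain text where _underscores_ mark italics):

```python
import re

def usable_cite(sentence, word, cited_words):
    """Apply the proposed filters to one candidate sentence.
    `cited_words` tracks words already cited from the current work."""
    tokens = sentence.split()
    # No sentences of fewer than 10 words (inadequate context).
    if len(tokens) < 10:
        return False
    # No capitalized words (names and similar).
    if word[0].isupper():
        return False
    # No words in italics (mention rather than use).
    if re.search(r"_\w*%s\w*_" % re.escape(word), sentence):
        return False
    # No quotations of 4 or fewer words containing the target word.
    for quoted in re.findall(r'"([^"]*)"', sentence):
        qwords = quoted.split()
        if word in qwords and len(qwords) <= 4:
            return False
    # At most one citation per word per work.
    if word in cited_words:
        return False
    cited_words.add(word)
    return True
```

The italics and short-quotation checks are the format-dependent part; each source corpus would need its own version of those two rules.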
 * I think the bot should not add citations where the citations: or /citations page already exists. And I feel strongly that it should add a template like .—msh210 ℠ 21:31, 18 March 2008 (UTC)
 * No /citations pages exist anymore, they were all moved to the namespace. As for extant pages...this is a mixed bag.  If the bot can't edit existing citations pages it will be restricted to adding only one citation per term...this wouldn't even cover the CFI.  I do plan on restricting where citations will be added as follows:
 * Max 1 cite per decade. (This would also restrict the bot to only one citation per input work)
 * Pages which are subdivided (i.e. quotes are already sorted by sense) add an "unsorted" section to the bottom of the page and leave all citations there.
 * Max n total citations per term. I have not decided exactly how many cites n will be...it may be based on the number of definitions in the entry or it may be static...
 * I am also planning on using the   template, with additional parameters which are unshown but will mark the citation as created by bot.  The bot will also keep track of itself, recording information about what source texts it has used and what words were cited from each, as well as some other random information.  Once the framework is finished I will be seeking input about all the details before it does any significant work. -  21:41, 18 March 2008 (UTC)
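The "max 1 cite per decade" thinning described above could be sketched like this (a hypothetical helper, assuming the candidates arrive as (year, citation) pairs):

```python
def one_per_decade(candidates, cap=10):
    """Thin a list of (year, citation) pairs down to at most one
    citation per decade, keeping the earliest in each decade,
    capped at `cap` citations total."""
    chosen = {}
    for year, cite in sorted(candidates):
        decade = year // 10 * 10
        if decade not in chosen:
            chosen[decade] = (year, cite)
    return [chosen[d] for d in sorted(chosen)][:cap]
```

Whether `cap` is static or derived from the number of senses in the entry is exactly the open question about n above.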

I was wondering which of the following would be best/most useful in terms of what context is included with the usage: a fixed number of words before and after, regardless of grammatical structure, ignoring punctuation and paragraph structure; or a sentence before, the sentence containing the example, and a sentence following, regardless of length. I suppose there could be a hybrid version of these, as well. - 19:22, 30 March 2008 (UTC)

template:reflist
I found a use of template:Reflist. Do we want to import this from WP? Just create our own simpler one that expands to <references />? Or eliminate occurrences of this template? All would be valid decisions in my opinion. RJFJR 21:41, 3 March 2008 (UTC)
 * As far as I am aware we do not use references in the main namespace and it should thus not be imported. I am not sure about the Appendix namespace, but I feel that the MW default is probably good enough. Conrad.Irwin 16:36, 4 March 2008 (UTC)


 * The template should not be imported or used here. It was developed under the assumption that a page would have a single list of references.  Here, where there may be several language sections on a page, there may also be several References sections.  As a result, the template has issues that prevent its smooth functioning. --EncycloPetey 02:50, 5 March 2008 (UTC)


 * I'm not sure why this wouldn't be useful in Appendix: space, where these concerns wouldn't normally apply. If we're concerned about inappropriate use in entries, a namespace check could be built in. -- Visviva 14:14, 9 March 2008 (UTC)


 * Because most people aren't doing work in the Appendix namespace, and most that are have no References section on the page anyway. It might have potential use, but so do a thousand other ideas. --EncycloPetey 03:37, 13 March 2008 (UTC)

sanitation
Ok, what the hell am I doing wrong here? Why are all the following levels getting included in the translations box? I am positively stumped. -Atelaes λάλει ἐμοί 20:49, 5 March 2008 (UTC)


 * Ok, somehow I fixed it by this edit. Anyone who can figure out what I did gets a pat on the head and a gold star.  -Atelaes λάλει ἐμοί 22:45, 5 March 2008 (UTC)


 * Nevermind. Conrad.Irwin figured it out (and received the promised reward).  It was a right-to-left character.  Sorry to waste everyone's time.  -Atelaes λάλει ἐμοί 22:50, 5 March 2008 (UTC)

Talking to a user at a dynamic IP address
In the last couple of days a user has been adding metric conversions to many articles - e.g. by adding (25 mm) after a dimension in inches. I assume that these are meant in good faith, and some of them are useful. Some are inappropriate (because the definition has to be in inches etc) and some are downright wrong (in the middle of citations). However, the user gets a different IP address for almost every edit (they all start 61.18.170). This means that we cannot leave a message on his talk page - he would only see it one time in 256. Other than having 256 identical talk pages, or 255 redirects and 1 talk page, can anyone think of a way to talk to him? All I can think of is a short duration block on the IP address range - but if it is too short he will not see it, and if it is too long he won't be able to edit for a while. Any ideas? SemperBlotto 09:55, 9 March 2008 (UTC)


 * Perhaps a medium-term block, with a note that you/we will lift the block once the user confirms that s/he has read and understood the note? Not ideal. -- Visviva 14:17, 9 March 2008 (UTC)


 * We've had a similar problem with a user who has often edited the WOTD entries. He/She specializes in adding Polish translations and IPA pronunciations.  The problem is that the IPA is wrong at least 20% of the time. I've left messages, but the user takes a different IP address each time, and I've never gotten a response. --EncycloPetey 01:24, 12 March 2008 (UTC)

how do you force a comma between context tags?
In the dungeon entry, the first definition is
 * The main tower of a motte or castle; a keep or donjon.

What is meant is that this was the original sense, but it is now obsolete, i.e. (originally, obsolete). Without the comma it is saying the word was originally obsolete (is this possible?) with the implication that it no longer is.

What syntax is needed to force the comma? Thryduulf 11:39, 9 March 2008 (UTC)


 * You could reverse them, to give . I think part of the problem may be, which isn't very clear.  Maybe it could say (original meaning)? -- Visviva 13:54, 9 March 2008 (UTC)


 * Also, it seems that was originally meant to be used in labels reading (originally XYZ) rather than to mark the original sense.  This use is still found in cases like Shipshape and Bristol fashion, but has largely been replaced by the other use, despite the ugly trailing space thus generated.  IMO we should really have separate templates for these two very separate uses.  Note that  suffers from a similar confusion. -- Visviva 13:59, 9 March 2008 (UTC)


 * Yes, somehow the different meanings need to be disambiguated. It should really be something like, but that is a lot of syntax. If all uses of "originally" in this sense also mean obsolete, one might make a template that says that. (as you say, have two.) Also usually. Robert Ullmann 14:15, 9 March 2008 (UTC)

Problem with multiple mentions of same term
Have a look at dag: because ‘slang’ is used twice in, there is a template loop. How to solve this? H. (talk) 14:32, 11 March 2008 (UTC)


 * It is annoying that MW won't allow a few levels of recursion before complaining; but I can see where that would lead to lots of trouble with people who don't care how much is generated as long as the result "looks right".


 * If this particular case happened a lot, the standard trick is to create Template:slang 1 that redirects to slang, and use it the 2nd time. (Likewise 2 and 3 ... this is what context does.)


 * What I did was to write slang so it won't match the template name the second time. This is a kludge. (It would also work for the case above)


 * The word is only derogatory in New Zealand? It might be clearer to write.


 * There is a small voice telling me there is a better way (than the nowiki thing), but I don't have it yet ;-) Robert Ullmann 15:01, 11 March 2008 (UTC)
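The redirect trick mentioned above amounts to a one-line template page (a sketch; the page name Template:slang 1 is the one suggested, and the target is assumed):

```
#REDIRECT [[Template:slang]]
```

An entry that needs the label twice can then invoke {{slang}} the first time and {{slang 1}} the second, so the parser never sees the same template title expanded twice and does not flag a loop.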


 * Works fine now, tweaked {context} a bit. Robert Ullmann 14:26, 19 April 2008 (UTC)


 * Could this be related to the recent problem with the label on sense 21 of head? Maybe something to do with arguments containing wiki markup?  -- Visviva 06:09, 20 April 2008 (UTC)

Old-revision warnings.

 * MediaWiki:Revision-info&#x3A;:

Revision as of $1 by $2
 * wikipedia:MediaWiki:Revision-info&#x3A;:<div id="viewingold-warning" style="background: #FFBDBD; border: 1px solid #BB7979; color: #000000; font-weight: bold; margin: 1em 0 .5em; padding: .5em 1em; vertical-align: middle; clear: both;">This is an old revision  of this page, as edited by $2 at $1 . It may differ significantly from the current revision.</div>

<dl> <dt>MediaWiki:Editingold:</dt> <dd></dd> </dl>


 * wikipedia:MediaWiki:Editingold&#x3A;:<div id="editingold" style="background: #FFBDBD; border: 1px solid #BB7979; color: #000000; font-weight: bold; margin: 2em 0 1em; padding: .5em 1em; vertical-align: middle; clear: both;"> You are editing an old revision of this page. If you save it, any changes made since then will be removed.</div>

I can't say I'm a fan of Wikipedia's pink — it's kind of ugly, and it's better at grabbing your attention than at being readable once it has it — but surely we can do better than we're currently doing. (Note that Wikipedia even has id=""s and everything so users can CSS-customize their warnings to be toned down — and with revision-info, it has a special viewingold-plain div that's undisplayed by default so Wikipedians can CSS-customize and get the text we have.)

—Ruakh <i >TALK</i > 03:22, 12 March 2008 (UTC)


 * Well... I had a go, you can now edit the styles at the bottom of MediaWiki:Common.css, please feel free to fix the ugly clashy colour combination that now results. Conrad.Irwin 23:03, 12 March 2008 (UTC)


 * O.K., it took me a while to figure out what you did, but now that I understand, I think it's great. Thank you. :-)  —Ruakh <i >TALK</i > 22:27, 16 March 2008 (UTC)

finding a word without knowing its spelling
I've noticed that several of the feedback suppliers have noted that, if they don't know the spelling of a word, they can't find it. I think that this is a serious issue, one that needs desperately to be addressed, and a drawback — until it is addressed — of Wiktionary vis-à-vis a paper dictionary. They've come up with at least two suggestions, and I'll use English as an example: The second suggestion sounds great in theory, but I have no idea whether it's feasible; the multilinguality of Wiktionary would make it even harder, I suppose (since, e.g., houille may sound like oui in French but like who'll in English). And the first suggestion would require too much maintenance, I suspect.
 * On each English page, link to the alphabetically next and alphabetically previous English words; thus, as in a paper dictionary, people can "flip through" entries if they don't see what they want.
 * Have special:search have a capability for sounds-like searching (e.g., Soundex).
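As a rough illustration of the second suggestion, a Soundex key collapses a word to its first letter plus three digit codes, so misspellings that sound alike map to the same key (a sketch only; nothing like this is currently wired into Special:Search):

```javascript
// Classic American Soundex: keep the first letter, then encode the
// rest as digits, skipping vowels and collapsing adjacent duplicates.
function soundex(word) {
    var codes = { b:'1', f:'1', p:'1', v:'1',
                  c:'2', g:'2', j:'2', k:'2', q:'2', s:'2', x:'2', z:'2',
                  d:'3', t:'3', l:'4', m:'5', n:'5', r:'6' };
    var letters = word.toLowerCase().replace(/[^a-z]/g, '');
    if (!letters) { return ''; }
    var out = letters.charAt(0).toUpperCase();
    var prev = codes[letters.charAt(0)] || '';
    for (var i = 1; i < letters.length && out.length < 4; i++) {
        var c = letters.charAt(i);
        if (c === 'h' || c === 'w') { continue; } // h/w don't break a run
        var code = codes[c] || '';
        if (code && code !== prev) { out += code; }
        prev = code; // a vowel resets prev, so repeats across vowels count
    }
    while (out.length < 4) { out += '0'; } // pad to letter + 3 digits
    return out;
}
```

A search backend could index each headword under its key and offer near-matches: Robert and Rupert, for instance, both yield R163. Note that Soundex is tuned to English; a multilingual wiki would need per-language schemes, as the houille/who'll example above shows.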

But we must find some solution.

I think that having a bot populate every language's index page with every word in the language (that we have) is a good start. If we then link more prominently than we now do to the indices, people will know where to look for an alphabetical list of words; this will allow them to find a word even if they're unsure as to its spelling. I doubt it would be hard to have a bot populate the indices. It can either
 * start with existing words, and then add new ones in real time (say, after the L2 has been in place for 24 hours) or
 * add from each dump.

There'd be the issue of removing entries: we'd want the bot to remove entries it had added that have since turned into redlinks; but would we want it to remove other redlinks? I guess not, in which case it'd be, I'm guessing, too much work to have the bot remove its additions. But anyone deleting an entry is supposed to check for links into the page anyway, so can remove the index entries.

These are some of my thoughts on this — in my opinion — important issue of Wiktionary's usability. Please propose further ideas, or comment on mine.&mdash;msh210 &#x2120; 16:45, 12 March 2008 (UTC)


 * I agree with the need to do something. When I'm unsure of a spelling of a word I want to look up, I run it through Google first - the "did you mean" option there is usually a good way of finding the correct spelling (it doesn't always work though). I presume that it has a dictionary that it looks through and selects the closest match?
 * One good way to populate suggestions for correct spellings is to look at the types of spelling error people make. Probably my most frequent mistake is to use a double letter where there should only be a single and vice versa. I also make -ely/-ley, -y/-ey and -aly/-ally errors. There is probably some research about this somewhere. Thryduulf 17:16, 12 March 2008 (UTC)


 * I wholeheartedly agree that we need to accommodate users who make mistakes, whether those mistakes are slips of the finger (typos) or slips of the mind, eye, or ear. Soundex and other tools for extending the search, and some easier way for users to get back to search if they didn't find what they wanted (e.g., to see whether what they typed is what they wanted to type) and to change it, seem like wonderful ideas. But isn't this something that would work at the Wikisoftware level? I understand that there is some modest expansion of the development staff at WMF. Do we have some kind of wishlist? OTOH, if there is something that "we" (an ironic we, since I have little to contribute technically) can do without taxing the meager resources at WMF, that would be wonderful. DCDuring <i >TALK</i > 17:20, 12 March 2008 (UTC)


 * I think that the best solution to this problem is an index. Here's what I'm envisioning: User types in a word (the spelling of which they are unsure of).  They get to a blank page or the wrong page.  So, they click the little Index button on the right, and an index slides out, showing them the preceding 30 words and the succeeding 30 words.  Now, the user knows that these are the proximate words on a multilingual dictionary, and so many of them will not be in the language they are looking at (because the index tells them so, in no uncertain terms).  So, since they are looking for a French word (and who can ever spell French words?  Half the letters are silent!), they switch the setting on the Index to French, which pulls up the preceding and succeeding French words (which are run out of the French index, which is bot updated after every dump).  The user finds the word, and is so overjoyed that they petition the United Nations to fund Wiktionary so we can all quit our jobs and get paid to do this.  -Atelaes λάλει ἐμοί 17:55, 12 March 2008 (UTC)
 * As you know, we already have indices by language and a general index; perhaps it would be simplest to populate the former and have links from nonpages to each (perhaps to  and index:All languages).&mdash;msh210 &#x2120; 22:32, 12 March 2008 (UTC)


 * Another solution (not exclusive of any other approaches) is to set up a page with tips for ways to find words when you don't know the spelling, or even if you can't quite remember the word. One way I do this is to search for related words, such as synonyms or words likely to appear in the definition.  --EncycloPetey 18:10, 12 March 2008 (UTC)


 * See Grease_pit below, as this addresses the same issue.--Richardb 00:50, 6 April 2008 (UTC)

Preventing collapse of tables during preview
Does anyone disagree with me that preventing tables from collapsing during preview would be a good thing? This may require a modification to MediaWiki because there is currently no good way to distinguish "Show Preview" from "Show Changes" in Javascript. The  variable in both cases has the value of   and I can't see anything else besides the presence/absence of   to distinguish them. This probably limits anything we can implement right now to work only in Monobook, but I think even that would be a big usability improvement. Having tables collapse while you're trying to edit them sucks (beyond the debatable suckiness of having them ever collapse). Mike Dillon 03:23, 13 March 2008 (UTC)


 * (1) There was a brief period, not very long ago, during which tables didn't collapse during preview. (For me, at least; I don't know what other people saw.) I don't know what the story was, but I was sad when it went away. (2) It seems that a "Show Changes" page wouldn't include any tables that needed to be collapsed, so I don't see the need to distinguish them: if you just leave tables uncollapsed whenever  is , that should have no effect at "Show Changes", right? —Ruakh <i >TALK</i > 03:29, 13 March 2008 (UTC)


 * Actually, it would show all tables in the section being edited, even if only one of the tables is of interest to the edit in question. I like previewing the table in collapsed form because it allows me to verify that the syntax of the edit is correct. --EncycloPetey 03:35, 13 March 2008 (UTC)


 * Perhaps this should be a preference? I imagine (but obviously could be wrong) that most editors would prefer the tables to be open during previewing, but there's no reason editors who prefer them closed shouldn't be able to set that. BTW, if the tables are collapsed, then you don't actually verify the syntax; before Visviva created the redirect, there aren't numbers for how many times I tried to use instead of . It worked fine until I opened the table. ;-) —Ruakh <i >TALK</i > 03:44, 13 March 2008 (UTC)


 * I just took a closer look at the code and it seems to avoid collapsing if you're previewing an edit to the Translations section on its own. If you click the main edit link or any other header, the collapsing boxes are collapsed. And you are right about "Show changes" too; I didn't think it through entirely. P.S. The code is in MediaWiki:Monobook.js, so all of this only applies to monobook anyways. Mike Dillon 06:20, 13 March 2008 (UTC)


 * Oh! That makes sense. I guess I don't usually preview just a translations section on its own. Now that I know that's how it works, though, I guess I'll start doing that. Thanks! :-) —Ruakh <i >TALK</i > 12:14, 13 March 2008 (UTC)

Categorise entries with the template
At the moment I'm spending much of my time on Wiktionary going through special:UncategorizedPages. A significant proportion of the entries are Italian verb forms, most of them using the template.

For example affiancandoti -

It should be possible for this to be categorised automatically to category:Italian verb forms, or ideally more specifically (as the category is hugely overpopulated). Thryduulf 23:31, 13 March 2008 (UTC)


 * That template call doesn't provide enough information to know that it is a verb form, unfortunately. Since the first parameter (form) is free-form, you'd have to special-case a lot of stuff to figure out the right category. Mike Dillon 23:50, 13 March 2008 (UTC)


 * Uh yeah... the template never uses the word "verb" anywhere, which makes this difficult. The solution I came up with for Latin was to get categorization from the inflection line, like we do for other entries.  So, we have  which categorizes the entry as a Latin verb form.  Doing something similar for the Italian verb inflections should be trivially easy, since it would mean replacing the inflection line only, and for the Italian verb forms, that should always match the pagename. --EncycloPetey 23:51, 13 March 2008 (UTC)
 * &mdash;msh210 &#x2120; 16:56, 17 March 2008 (UTC)
 * Yes, exactly. And thanks for explicitly writing out the template with arguments; I really should have done that myself when I posted before. --EncycloPetey 17:39, 17 March 2008 (UTC)

Edit counting
Can anyone recommend an edit counter that works well with Wiktionary? I've used wannabe kate for a long time, for both Wikipedia and Wiktionary, but it's gone sort of kooky lately for Wiktionary counts (although it's still OK for Wikipedia) and w:User:Interiot, who used to maintain this tool, has now drifted away from the project. -- WikiPedant 18:16, 14 March 2008 (UTC)


 * It is now integrated into your user Special:preferences for no very good reason. Conrad.Irwin 19:30, 14 March 2008 (UTC)


 * I use http://tools.wikimedia.de/~river/cgi-bin/count_edits. —Ruakh <i >TALK</i > 23:11, 14 March 2008 (UTC)


 * Thanks, guys. I seldom visit Special:preferences and hadn't noticed that the edit total is there.  Nice touch, actually.  The tool at wikimedia gives more info and is just what the doctor ordered.  Thanks, Ruakh.  I'll add a link to it on my user page.  -- WikiPedant 01:30, 16 March 2008 (UTC)

Converting redirects to script templates
(re discussion at WT:ID)

User Pistachio set out to convert some of the XXchar template names, now redirects, to the ISO script names and tags we are using. It was suggested that this would be useful for AF to do; particularly as people will continue using the old names for a fairly long period. (That is just what AF does, fix little things so people don't have to do all the picky stuff perfectly; it still fixes things like "cattag" to "context", and "Wikipedia" to "wikipedia" which of course 'pedians will persist in forever ;-)

There are several cases: direct use of the template, and use with sc= in various templates. (And others?) I added code to AF to do this (entries in its general regex table). The question is exactly what set of existing redirects can/should be automatically done. Some may have issues with exactly how they have been used.

All of the first column should be redirects (in alpha order), the second column the script codes, with language tags (only when needed). Please add to/edit this table (in place).

If there is an issue with one of these, please note. Robert Ullmann 12:13, 15 March 2008 (UTC)


 * Using the entries above (less SDChar) seems to be working properly. See koala, talk, etc Robert Ullmann 17:49, 15 March 2008 (UTC)

I hate to criticize an approach when I don't have a better idea to offer, but I don't really like the whole fa-Arab: etc. thing. fa-Arab denotes Persian, as written in the Arabic script; it does not denote the Arabic script, as used for Persian. Using it as a script template doesn't make great sense, since ideally, we'd want it available as a language code. —Ruakh <i >TALK</i > 04:56, 16 March 2008 (UTC)


 * I also came across being used for Azeri (at one:). Not sure if it muddles the issue even more to use  for a language other than Persian. Mike Dillon 05:32, 16 March 2008 (UTC)


 * I'm not sure; strictly it should be az-Arab (if not Arab), but using a related language that way might be reasonable. Of course in that case, simply redirecting az-Arab to fa-Arab would probably be better. So this is either to be considered a simple error, or just okay ;-) Robert Ullmann 12:44, 16 March 2008 (UTC)


 * Azerbaijani alphabet is a pretty interesting read. It looks like the language has been written in Arabic, Cyrillic, and Latin scripts and that the Arabic variant used is "Perso-Arabic" (aka fa-Arab). Mike Dillon 05:54, 17 March 2008 (UTC)


 * See IETF language tag - format is 4646 compliant ^_^ And means exactly what you said: Persian language written in Arabic script. --Ivan Štambuk 05:37, 16 March 2008 (UTC)


 * He is saying that {fa-Arab} would be a language name template like . But we don't need or want that as we don't distinguish languages by script (e.g. in L2 header), what we want is to know what fonts to use, and what language tag to give to XHTML (which in this case is "fa-Arab"). The ISO script and language codes are designed for use together in various applications; in this particular application (web rendering), we need the tags for HTML/XHTML: (which we don't use much, mostly specifying fonts that work well, but that is what is used for full browser-side font selection, which FF does but IE doesn't AFAIK) Robert Ullmann 12:44, 16 March 2008 (UTC)


 * This reminds me of something. Template:Cyrl is currently using a class of "RU" even though it is used for languages other than Russian. While this is just an internal cosmetic issue with the name we use to pick the right CSS rules in MediaWiki:Common.css, the fact that it doesn't specify a lang attribute is more of a real issue. Of course, it can't use "ru-Cyrl" or "ru" since it is used for all languages that use Cyrillic, but I was thinking that we could start passing a language code to script templates that are used for more than one language. This would allow us to construct  in cases where the language is available. In combination with Template:lang2sc if we ever start using it, I think this could greatly improve the self-descriptive quality of our markup. Mike Dillon 16:22, 16 March 2008 (UTC)
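The sort of parameterized markup Mike describes might render along these lines (a sketch; the class name follows the existing script-template naming, and the words and language codes are purely illustrative):

```html
<!-- script template output when no language is known: script class only -->
<span class="Cyrl">мир</span>
<!-- the same template with a language code passed in -->
<span class="Cyrl" lang="ru" xml:lang="ru">мир</span>
<span class="Cyrl" lang="bg" xml:lang="bg">мир</span>
```

The CSS in MediaWiki:Common.css would key off the class for font selection, while the lang attribute carries the semantic information for browsers, screen readers, and search engines.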


 * Along these lines, very few of the script templates, if any, are really limited to a single language... Even things like Template:Jpan, which seem clear-cut are not necessarily so... Okinawan (ryu) and Ainu (ain) both use some mix of Kanji and kana scripts, but neither of them are "Japanese" (ja). Mike Dillon 05:52, 17 March 2008 (UTC)

Enhanced patrolling no longer works?
For a few days now I have no longer seen the (mark) links in Special:Recentchanges. Is this function switched off, or am I doing something wrong? (I've checked WT:PREFS a few times already.) H. (talk) 21:06, 15 March 2008 (UTC)


 * This was just fixed by Connel yesterday (TheDaveRoss screwed up some things). If you refresh your cache on User:Connel MacKenzie/patrolled.js, it should all come back.  -Atelaes λάλει ἐμοί 21:10, 15 March 2008 (UTC)
 * Sure blame me! I blame javascript for sucking. -  00:09, 16 March 2008 (UTC)


 * Blech. Is there an easy way to make the popup confirmation disappear?  It serves no useful purpose that I can see, and slows one down enormously when marking a batch of revisions (as for example when an anon has made 20 edits   to an entry that has been brought up to code).   -- Visviva 15:35, 18 March 2008 (UTC)


 * I'm not currently getting those popups, so maybe someone's fixed this now? If not: back when I used to get those popups, I'd middle-click instead of left-click, so that Firefox would open the "now marked patrolled" page in a new tab while leaving me at the recent-changes page. Very effective; you can open a ton, then clear them all out. If there's nothing else in the window, you can even right-click on the recent-changes tab and select "close other tabs" to close all the "now marked patrolled" tabs at once. —Ruakh <i >TALK</i > 23:48, 18 March 2008 (UTC)


 * A recurring theme is that the wording of those prompts in WT:PREFS is counter-intuitive. The 'Patrol in "expert mode" with no alerts' gets rid of those annoying verification things.  Suggestions on how it should be reworded are welcome.  --Connel MacKenzie 15:39, 25 March 2008 (UTC)

Edit a protected template
Template:t has a minor display bug, which I documented and tested at User:Mzajac/test. The bug-fix is at User:Mzajac/t. I can't apply it because the template is protected. I posted a request at template talk:t, with no response. Would someone please fix the template? Thanks. —Mzajac 16:30, 17 March 2008 (UTC)


 * The fix has been applied, thanks. —Mzajac 16:48, 18 March 2008 (UTC)

Missing tab in Citations
Articles in the Citations namespace do not have a tab at the top to get back to the article (unlike Talk pages). There is normally a link near the top of the text, but not always. SemperBlotto 16:51, 17 March 2008 (UTC)


 * Looking at the code, it looks like this behavior used to be there, but it was lost when Citations became a real namespace. Mike Dillon 18:16, 17 March 2008 (UTC)
 * There should be a real tab; and a real tab from the entry to the citations:. But as a temporary measure, we should have a JS tab back, just as we have a JS tab forth.&mdash;msh210 &#x2120; 18:22, 17 March 2008 (UTC)


 * Conrad has just added a fix for this, please let him know if you notice anything erratic or problematic about it. - 16:33, 29 March 2008 (UTC)

wikiversity template required
Would someone like to create a template? I have used it in metacommunity as there wasn't a Wikipedia article. SemperBlotto 11:42, 18 March 2008 (UTC)
 * Should be done now, let me know if I need to tweak it. Conrad.Irwin 12:28, 18 March 2008 (UTC)

Interwikis and ta.wikt (Tamil)

 * Moved to Beer Parlour Robert Ullmann 14:49, 22 March 2008 (UTC)

Customizing t, t-, t+ templates
Hippietrail has added colours to the CSS for the templates, making them 3 different colours with tweaks for visited/not visited; you'll see them whenever your cache gets that update to Mediawiki:common.css. He did tie them to the superscript tags, which makes it impossible to customize to remove the superscripting.

I've taken it a bit farther, adding classes that allow a large range of customization. Except on IE, which, unlike every other browser out there, doesn't implement CSS 2.1 ... Even IE8 doesn't do it! (We're Microsoft, We don't have to. ;-)

The new template code is in, , ; it can't be substituted into t etc until everyone has the new common.css; that takes about a month. So no hurry, take a look ;-)


 * t, FL unchecked:
 * t-, FL nonexistent:
 * t+, FL exists:

On all but IE, this should look the same as you see now with {t}. IE loses the parentheses, at any version.

On IE, one can customize the colours, font, remove the superscripting, change font size. Or make the links disappear.

On all other browsers one can:
 * do the above, plus
 * change the parentheses to square brackets, or anything else, or make them go away
 * change code to a symbol, either different for t/t-/t+, or the same

So just about every suggested alternate form during the lengthy discussions is supported, except displaying the language name.

You can copy the code from common.css (the second block re t templates) and play in your own monobook.css. The trick for a symbol is to use :before with .tpos, .tneg, .tunk, and display:none with .tlcode to make the language code disappear. All other things are bog standard application of styles to those classes.
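For instance, the symbol trick described above might look like this in a user's monobook.css (a sketch; the selectors are the classes named above, and the symbols chosen are arbitrary):

```css
/* Replace the language code with a status symbol, non-IE browsers only */
.tpos:before { content: "+"; }  /* t+: FL wiki has the entry */
.tneg:before { content: "-"; }  /* t-: FL wiki lacks the entry */
.tunk:before { content: "?"; }  /* t: not yet checked */
/* Hide the language code itself */
.tlcode { display: none; }
```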

The above should look just like {t} does now (except on IE, no parentheses), if you have the CSS. If you see anything else (you may have to wait for both server-side squid cache and your local ISP ...) then please tell me. You can copy the relevant block into your own monobook.css to see it right away. Robert Ullmann 08:54, 21 March 2008 (UTC)


 * The following shows first {t new}, then {t}, for the 3 variants:


 * t:, chat
 * t-: ,
 * t+: ,


 * please tell me if these look different in your browser (s/w and version, platform and version), other than IE not showing parens. The font size may not be quite right. (75% now)


 * hippietrail says that the underscore (shown while hovering) is on the baseline in FF 3.something (dev/beta version); this is wrong, but FF 2.0.0.1 (present release) gets it right, so it ought to be fixed ;-)


 * now FF 2.0.0.13 isn't showing the hover underlines ... Robert Ullmann 11:09, 26 March 2008 (UTC)


 * if the disappearing parens in IE is a problem, it can be worked around (with more bloat, of course). Any opinions? Robert Ullmann 11:42, 22 March 2008 (UTC)


 * Safari 3.1/Mac also shows all of the :hover underlines on the baseline, but I don't think this is a major problem. It will be nice to customize these. [note: the example above shows only the new templates] —Michael Z. 01:45, 26 March 2008 (UTC)


 * Um, kind of silly to show new and new, sorry ... I'm working through combinations to see if I can get something that keeps the parens in IE, allows as much customization as possible, and minimizes glitches (for example, as I have it now, if you use display:none to suppress the links, you still get an extraneous &amp;nbsp; which is no good) Robert Ullmann 11:05, 26 March 2008 (UTC)

I'm messing with the new templates for a little while, you may see oddities ;-) Robert Ullmann 11:38, 26 March 2008 (UTC)


 * wow, IE sure has trouble complying with standards, doesn't it? They (of course) don't even try. vertical-align:super and the sup tag are defined as doing the same thing. Except for IE. I've been spoiled by FF; I look at sites in IE now and they look like crap...


 * I have a consistent combination of t new and common.css in place again now; you might need to refresh. The parens show in IE, but FF can still customize, like doing [de] and the like. Robert Ullmann 12:12, 26 March 2008 (UTC)

Please see Template talk:t ... I think I've got the glitches covered. IE shows the parens, but in any browser you can turn them off; in any non-IE browser you can change them to [ ]. The space before will go away if the link is suppressed (and is a bit thinner by default ;-). Hover underscore should be correct, and in the correct colour. Robert Ullmann 13:58, 26 March 2008 (UTC)

A change in Template:audio ?
In this template, I think it would be helpful to the reader to change the superscript word "file" to "play." "File" has many meanings and, in this context, provides no direction to the reader as to what he should do in order to hear the word. But, seeing the word "play," the reader would immediately know that he should click there to hear it. This would be especially helpful to first-time users of Wiktionary.

The template is locked, so an admin would have to do this. Wahrmund 19:29, 23 March 2008 (UTC)


 * The file superscript does play the file; it goes to the media description page. The main link plays the file. Mike Dillon 22:10, 23 March 2008 (UTC)

I understand that the superscript plays the file. I am only suggesting that the label which invokes the routine be changed to "play" so that the reader will be directed to it and will know he should click on it. Everything else would remain the same. Wahrmund 02:29, 24 March 2008 (UTC)


 * I meant to say "doesn't". The main link plays the file. Mike Dillon 02:39, 24 March 2008 (UTC)


 * Perhaps then "file" would better be changed to "info" or "information"? Thryduulf 02:42, 24 March 2008 (UTC)
 * Why change it? It only takes the user to information if the file exists. When the target file does not exist, it allows the user to upload an audio file to Commons. --EncycloPetey 03:02, 24 March 2008 (UTC)


 * I expect, when I click on a link that says "file", to be downloading a file. I agree that a change to "info" would be a good move, and less confusing for those who think the link is actually to the file - which it isn't. Conrad.Irwin 16:45, 25 March 2008 (UTC)


 * But if we change it to "Info", then in cases where the link is red (because the target doesn't exist yet), the user will not get the expected info. The "file" link takes the user to Commons, which is where the file is located.  I always play the file by clicking on the speaker icon, which is the logical way to play the file. --EncycloPetey 23:48, 25 March 2008 (UTC)

Both the entry word and the superscript word file can be used to play the pronunciation of the entry. If the user clicks on the entry (and is using IE7), he will get a window which asks "Do you want to open or save this file?"

Alternatively, if the user clicks on the superscript file, the file-image page opens on Wikimedia Commons. On this page is a large button labelled "Play sound." By clicking on this button, the user can immediately play the pronunciation via streaming audio. Clearly, this is the better alternative—it is quicker and easier for the user. And therefore he should be directed to click on the superscript, which can be done by changing file to play.

Clicking on the entry word does give the user the option of either Opening or Saving the file, but I think very few Wiktionary users would be interested in cluttering their computer with a bunch of little one-word sound files. So this option is relatively unimportant, and in any case it would still remain available.

I am still hoping that this can be done. Wahrmund 19:41, 25 March 2008 (UTC)


 * The thing is that for the majority of Wiktionary audio files (being only a couple of seconds long) it is actually quicker to "Open in Media Player" (or whatever is installed on the system) than to fire up the Java runtime and set up the streaming process; it is certainly quicker to download and stay on the same page than to go to commons, start streaming, and come back from commons. I think, from a usability point of view, that clicking on the "speaker + word" implies "listen", but I would like to remove (file) from the superscript, as I agree that it does sound as though it is a download - perhaps it should be (info) or (source) instead. Conrad.Irwin 20:11, 25 March 2008 (UTC)


 * I have never noticed that the streaming audio is perceptibly slow. Maybe it is a bit slow if the user has a phone connection. Changing to info doesn't seem good, as there is already a help superscript link that is part of the template. I think few users would be interested in reading the details of the source file -- they just want to hear the pronunciation as quickly as possible. Wahrmund 20:55, 25 March 2008 (UTC)

Frequencies into tables
Could someone get this table from this page into WT-able form? It is useful as an alternative for French frequency lists/1-2000, because it contains just the "pure" forms of the words. -Keene2 20:33, 23 March 2008 (UTC)


 * Are you sure that is not copyrighted? The bottom of the page says

Direction générale de l'Enseignement scolaire - Publié le 01 septembre 2002 © Ministère de l'Éducation nationale
 * If we can use it, just copy the source of the table into [http://www.uni-bonn.de/~manfear/html2wiki-tables.php this converter], then copy the output. Thryduulf 21:02, 23 March 2008 (UTC)


 * How does one find the source of the table? Keene2 22:07, 2 April 2008 (UTC)


 * That depends on your browser, but usually you can do [View > Page Source] or [Page > View Source] or the like. —Ruakh <i >TALK</i > 23:07, 2 April 2008 (UTC)


 * If it isn't copyright you can find it at French frequency lists/other, if it is copyright you can find the delete button at the top of the page :p - 02:43, 3 April 2008 (UTC)

Deleting old redirects from Conversion script
Um, three technical notes:


 * 1) It had been deleting 5 every 45 minutes, but sometimes was still flooding the deletion log and RC if no-one else was doing anything, so I spent an hour a while ago adding code to read a few hundred lines of each log and adapt the timer to not exceed 5% of RC or 50% of DL ... after running for a while the timer settled down to ... wait for it ... 47 minutes. Oh well. It will stall longer to avoid flooding during lulls. (Afternoon my time seems very quiet for everyone else.)
 * 2) I forgot the Talk page redirects. My excuse is that I usually download the XML without them (and without user pages, etc). I am fixing this.
 * 3) msh210 pointed out that there were also a bunch of bogus redirects from the move script that ran when we got new namespaces in June 2006, see this and this. (Not sure how that happened; those moves are not supposed to leave redirects, or else delete them?) And since I already had 99% of the code, could I do that? ;-) So I added a couple of lines for that too. Also a few Talk:Appendix:... and Talk:Concordance: pages; these were moved by Connel. (and my code is not set up to differentiate different reasons for redirects created by users who are doing a lot of different things ;-)

Just to document what is going on. Robert Ullmann 15:14, 25 March 2008 (UTC)
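The rate-adaptation in point 1 can be sketched as follows. This is an illustrative Python reconstruction, not Robert's actual code: the function name and the idea of passing log counts as arguments are assumptions, and the real bot reads the last few hundred lines of each log itself rather than being handed tallies.

```python
def adapt_interval(base_minutes, bot_deletions, other_rc, other_dl):
    """Stretch the wait between deletions so the bot's share stays
    under 5% of recent changes and 50% of the deletion log."""
    if bot_deletions == 0:
        return base_minutes
    # Share of each log the bot's deletions would have made up
    # over the sampled window.
    rc_share = bot_deletions / float(bot_deletions + other_rc)
    dl_share = bot_deletions / float(bot_deletions + other_dl)
    # Scale the delay up just enough to respect both budgets;
    # never go faster than the base rate.
    factor = max(rc_share / 0.05, dl_share / 0.50, 1.0)
    return base_minutes * factor
```

On a busy wiki (5 bot deletions against 95 other recent changes and 5 other log entries) the base 45-minute timer is kept; during a lull the factor grows and the bot stalls longer, matching the behaviour described above.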


 * I was under the impression that NS:0 redirects were the only ones prohibited - and the only ones desired to be removed by this bot. Talk page redirects get swept up as orphans, no?  As long as they are only redirects, it isn't a big problem - but please don't expand scope of this without a little more deliberation.  The namespace initiations have each had minor problems - Brion's init script left a few things behind each time.  I have no objection to Transwiki redirect removal - but please leave Appendix: redirects alone, as they are referenced from 'pedia.  --Connel MacKenzie 15:48, 25 March 2008 (UTC)


 * (Hi! get your computer back from the kids?). The 'pedia references things like Talk:Appendix:Emoticons? Moot point anyway as all of those are left from your moves. It is intended to only shoot things like Talk:Transwiki:Avuncular, which the move script shouldn't have left behind.


 * (No, but I've fixed a couple older computers that can load WT:GP in slightly less than five minutes. --Connel MacKenzie 16:51, 25 March 2008 (UTC))


 * (to be clear) I'm not talking about the Transwiki: and/or Appendix: namespaces, I'm talking about junk in Talk: (namespace 1) that starts with Talk:Transwiki: Robert Ullmann 16:38, 25 March 2008 (UTC)
 * OK. But I still view that as an expansion of scope/separate sort of task.  I agree they can go.  --Connel MacKenzie 16:51, 25 March 2008 (UTC)


 * It is a separate task, but the code is identical ;-) so it makes sense to just include them in the delete tasks in practice. Robert Ullmann 18:49, 25 March 2008 (UTC)


 * What "sweeps up" talk page redirects? And I would think if we are deleting Xyz -> xyz from a CS move, we should be deleting Talk:Xyz -> Talk:xyz left from the same move, no? Robert Ullmann 16:15, 25 March 2008 (UTC)


 * There are 4030 orphaned redirects in Talk: space (as of 20 minutes ago). Nothing is sweeping them up; Talk:Ampitheatre has been orphaned since January 2005 ... Robert Ullmann 16:27, 25 March 2008 (UTC)


 * Checking my code, I specifically skip redirects for no apparent reason whatsoever. On the next pass, I'll re-add them.  The orphan sweeps I do after XML dumps are what I was talking about.  --Connel MacKenzie 16:51, 25 March 2008 (UTC)


 * I see. But I think it is good for the automation to clean up the NS:0 redirect and the corresponding NS:1 at the same time (or do you want to do it separately for some reason? we can then yell at you for flooding RC and DL with a few thousand deletions ;-). My code as modified now will look at the redirect and the talk redirect one after the other (running through the ones it has already done first, of course). Ok? Robert Ullmann 18:49, 25 March 2008 (UTC)
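The pairing Robert describes, each content redirect followed immediately by its talk redirect, amounts to something like this sketch. The names are illustrative; the real code also verifies that each page is actually a leftover conversion-script redirect before deleting anything.

```python
def deletion_targets(redirect_titles):
    """Yield each NS:0 redirect followed by its NS:1 (Talk:)
    counterpart, so both are cleaned up at the same point in the run."""
    for title in redirect_titles:
        yield title
        yield 'Talk:' + title
```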

Pony / Feature Request: linking to a specific definition

 * Yes, it has been suggested before - but nothing really came of it. The problem is that all the defs are very close together so it is hard to distinguish. was my idea. I think we should reopen the discussion. (Sorry for removing your green but it made me queasy) Conrad.Irwin 00:19, 27 March 2008 (UTC)


Or, alternatively, if I downloaded MediaWiki and figured out the list-modifying code, how would I go about getting it into Wiktionary? Would I have to get it into MediaWiki itself and wait for it to come downstream? --Struthious Bandersnatch 08:06, 27 March 2008 (UTC)
 * That's why I think highlighting the background of the targeted definition might be the way to go. If someone with familiarity and access to the code that formats ordered and bulleted lists can put the anchors in I'll take a crack at the javascript.


 * Yes you would, and it would be no easy task even if you are a very skilled PHP programmer. The devs don't spend a lot of time on Wiktionary issues, I'm afraid.  But IMO a bigger problem with this approach is that definition numbers are inherently unstable, and always will be; definition 6 one day may not be definition 6 the next.  This is why we normally use glosses rather than numbers when referring to a specific sense within the entry (for example, in translation tables).  -- Visviva 09:28, 27 March 2008 (UTC)
 * If we used glosses rather than definition numbers for the anchors this wouldn't be a problem, although they could get unwieldy. Thryduulf 10:42, 27 March 2008 (UTC)


 * Well that sure makes it sound challenging! Unless it's total spaghetti code I would've thought there would just be a few loops (or declarative-style recursion iteration if it's done via templating) where parsing and formatting lists would be handled.  I'll have to take a look when I get a chance.

As far as definition numbers changing, that had occurred to me but it's no different a problem than someone changing a section header in an article, or any other anchor, for that matter. We live life on the edge and take big risks here in the Wiki world. ;^) --Struthious Bandersnatch 17:00, 27 March 2008 (UTC)
 * Linking to specific meanings (that's what languages really are, the set of meanings, not the particular sequences of symbols we call lexemes, as the usual definition has it) is one of the most important features MediaWiki software is missing. Now you have to both add a translation under the ====Translations==== section and create the new entry manually with exactly the same gloss, where it should be done all at once (in either direction). And the topical categorization could be fully automated (just add lang= into the FL context label, provided that the English meanings have context labels properly set). Also, ideally, meanings should be identified by GUIDs, and particular semantic relations could be defined, such as "antonymy", so when you add a word that's synonymous with a particular meaning a dozen other words link to, all of their ====Synonyms==== and ====Antonyms==== headers get automagically updated. Probably never gonna happen, but it's nice to fantasize about it. --Ivan Štambuk 11:15, 27 March 2008 (UTC)


 * Yeah, I don't know about GUIDs but the kind of functionality you're talking about is old guard W3C, going all the way back to XLink. No established implementation yet but people have been working on this stuff for a long time.  Have you seen what they've been doing over at the Semantic MediaWiki project?  Sizzlin' hot stuff. ;^) --Struthious Bandersnatch 17:00, 27 March 2008 (UTC)


 * Personally, I'm not particularly fond of metacrap and its implementation into natural-language constructions. Metacontent itself (like the dictionary definitions) at the semanteme level is an entirely different category. However, if you do figure out a way to do something like lang=en|meaning=element, preferably in some nice visual interface that will list all possible basic meanings of a wikified entry in a drop-down list to hook onto, I'm sure people will listen. --Ivan Štambuk 18:17, 27 March 2008 (UTC)


 * My idea with would be to put that around one key word in a definition, or perhaps the context tag - or maybe even just a hidden text. That could then be linked to term - though there would be issues with maintenance and ensuring uniqueness, I think this would be better than not being able to link at all. (See id id id for example.) Conrad.Irwin 19:08, 27 March 2008 (UTC)

Request for move of Category:Spices and Herbs
Could some admin please move Category:Spices and Herbs to Category:Spices and herbs and then notify me on my talk page? Afterwards, I will manually move all the entries from the old to the new category. Some background: When moving other categories, I have just created the target page as a new page and requested the old one for deletion, but this one has a history worth keeping, which would be lost in my usual procedure. --Daniel Polansky 07:48, 27 March 2008 (UTC)


 * Sorry, but not even admins can do this; see m:Help:Category. There is no way for the history to be preserved without also preserving the page at its incorrect title.  And there, folks, is yet another reason why putting significant content in category space is a Bad Idea.  -- Visviva 12:48, 27 March 2008 (UTC) forgot to sign.


 * I see there's a deletion notice on the old category page now. I suggest making it a redirect instead, in order to preserve the edit history (and adding an attribution notice pointing there on the talk page of the new category page). Cheers! <i style="background:lightgreen">bd2412</i> T 11:24, 27 March 2008 (UTC)

New ref functionality
I'm not sure how stable this feature is yet, but the current version of MediaWiki finally supports grouped references using the group parameter (a quick check of this is at User:Visviva/Ref test). Thus it is now possible to have footnotes that are limited to a language or etymology section. More radical things are also possible, although they would of course require some prior community discussion. Has anyone tried this out yet? Should we consider changing our standing anti-footnote policy? -- Visviva 09:40, 27 March 2008 (UTC)


 * Wonderful! Also — IMHO more broadly useful for us — each &lt;references/&gt; includes only the content of &lt;ref&gt;…&lt;/ref&gt;s encountered since the previous &lt;references/&gt;. So, we don't need to do anything special to give language sections their own references: just have a references section in each language that needs it, and it will automagically work. (See User:Ruakh/Cite for an example.) —Ruakh <i>TALK</i> 11:48, 27 March 2008 (UTC)


 * I actually made a test page for this too before I saw your two examples: User:Mike Dillon/Multi refs. Mike Dillon 03:06, 28 March 2008 (UTC)


 * Funnily enough, my page was for a long time a demonstration of MediaWiki's annoying bug/misfeature in this regard. Then it magically changed. :-) —Ruakh <i>TALK</i> 03:48, 28 March 2008 (UTC)


 * Good that it clears the references each time it generates them (which it always should have, that was just a bug). The harder part is getting it to format them for us. The HTML it generates by default is bogus. I looked at its configuration a long time ago, but since it wasn't useful anyway I didn't try too hard. We need something like:


 * [1] text of first ref note
 * [2] text of second ref note

but it uses &lt;li&gt; tags (unclosed IIRC) after explicitly starting a numbered list. And we only use numbered lists for defs. (This is part of the essential magic to make simple application parsing possible. ;-) So with, if you try to start an unnumbered list, you get:



which is not helpful. Apparently it generates the numbers explicitly in the text, and then for the references/ tag it just assumes they will be numbered the same. (all in all, pretty sloppy design...) We may yet need another fix to make this work. I'll go look at the doc again. Robert Ullmann 07:32, 28 March 2008 (UTC)


 * The real annoyance is that I can't get it to use the "many" format for "one" backlink, and the "one" format doesn't make the number available. It is sort of customizable, but falls just short of being really customizable. (If you've seen this sort of stuff as much as I have, you'll note how common that is. Authors just don't bother thinking it all the way through. That's one reason we don't have the string functions extension: it is very poorly designed.)


 * How do we want this to look?

Use in running text, or what?
Placing these ref tags in the text of other sections promises to do awful things to anything parsing the wikitext. For example, a footnote might be very reasonable at the end of a translations line, but putting the text of the reference there is bad news. I'm sure you can think of a few dozen other horrible cases. We might do better ignoring the Cite extension (removing it), and using something like and  (much fixed of course, now they are a mess ;-) That way the text of the footnote is in the References section in the wikitext. Robert Ullmann 09:10, 28 March 2008 (UTC)


 * That seems plausible; IMO we should never be citing assertions in definition or translation sections -- though explanatory footnotes for translations might have some useful role to play -- so I had assumed that the primary use for this would be in Etymology and Usage notes sections, where being able to cite authoritative sources is often of great importance (and where the Cite approach makes a fair bit of sense). -- Visviva 10:45, 29 March 2008 (UTC)

New preference to search for double-clicked words
Some people have asked for the ability to quickly look up the definition of any word in an entry (e.g. here). Well, I put together a quick javascript that will do that on double-click. You can enable it in WT:PREFS (it's called "Automatically lookup words that double-clicked on (in main namespace only)"). It's not the brightest script on the block; it just parses by whitespace and is slightly dependent on the browser/OS setup. For example, double-clicking on a word right after an open-parenthesis will cause it to try to look up the word with the parenthesis attached. I've only been able to test it on WinXP with IE 7 and FF 2. Comments and suggestions are, of course, greatly appreciated (better JS programmers, feel free to update the code).--Bequw → ¢ • τ 14:39, 27 March 2008 (UTC)
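The gadget itself is JavaScript, but the whitespace-parsing limitation described above could be addressed with a small cleanup step before the lookup. Here is a language-neutral Python sketch of the idea; the function name is invented for illustration and this is not the code that runs on the site:

```python
import string

def clean_selection(token):
    """Strip surrounding punctuation from a double-clicked token, so
    '(word' or 'word,' is looked up as plain 'word'."""
    return token.strip(string.punctuation + string.whitespace)
```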

MediaWiki:Patrol-log-header
I've authored MediaWiki:Patrol-log-header, which appears atop Special:Log/patrol. Just FYI.&mdash;msh210 &#x2120; 18:56, 27 March 2008 (UTC)

Citations: tabs broken fix?
Hi all, I just rebuilt our Citations: tab javascript (MediaWiki:Common.js). The new version:


 * (old) Adds to mainspace, and makes it the correct colour using api.php.
 * (fixed) Adds to citationspace, and colours it red if it doesn't exist.
 * (new) Hacks citationspace tabs to be the same order as mainspace tabs.
 * (new) Changes the link from Citations_talk:(.*) to Talk:$1.
 * (new) Adds an auto_redirect from Citations_talk:(.*) to Talk:$1.

While this works for me (on Debian Linux) in Firefox 2 and 3, IE6, Konqueror 3.5 and Opera 9 (And was reported as working in IE7 on http://devtionary.info/wiki/one and http://devtionary.info/wiki/Citations:fudgess) some IRC users complain that it doesn't seem to colour redlinks red, though from what they are saying everything else appears to work - and so I have left the Javascript in place. If anyone could confirm that it doesn't behave as detailed as above, or (even better) come up with a convincing reason for this, I would be very grateful. Conrad.Irwin 01:15, 29 March 2008 (UTC)


 * on XP Pro Firefox 3 Beta 4. - 01:19, 29 March 2008 (UTC)


 * Doesn't quite fix all the difficulties; there's one leetle problem left. Specific example:  If I go to Citations:listen, the "discussion" tab is red (because Citations talk:listen does not exist). But when I click the "discussion" tab, I am taken to Talk:listen, which is the talk page for the entry, and which does exist.  Now, either we need an additional tab (and namespace?) for "Citations_talk", or else the discussion tab should appear blue when the talk page for the corresponding entry in the Main namespace exists. --EncycloPetey 03:41, 29 March 2008 (UTC)

It may make sense to add "&redirects=true" to the API call to get the API to resolve any redirects (including double redirects, which is kind of cool). I know we shouldn't have them in the main namespace, but it's unclear to me whether the prevailing opinion about where citations belong would favor the addition of redirects in the Citations: namespace. Mike Dillon 04:08, 29 March 2008 (UTC)


 * I agree with Mike Dillon about the redirect; I noticed that last night right before I went to bed. As for EP's comment, I think we ought to eschew the Citations_talk namespace and have all discussions on the NS:0_talk pages. The discussion of citations is inherently the discussion of words, and it isn't as if the talk pages are overflowing as it is; combining them will result in less legwork to find relevant information.  We should still fix the CSS class of the discussion page so it shows up the appropriate color etc. -  10:31, 29 March 2008 (UTC)


 * Also: the red citations tab from NS:0 doesn't direct to the action=edit as it should. - 03:29, 30 March 2008 (UTC)

I have fixed all the above mentioned stuff except the &redirects=true, as I'm not sure what this would achieve - redirects are treated as blue links anyway. It works for me (Debian/Linux) with IE6, Firefox 2 & 3b4, Opera 9, Konqueror 3.5. This has not been tested in IE7; though I am fairly confident it does work there, confirmation would be nice. Conrad.Irwin 11:00, 3 April 2008 (UTC)

I am of the strong opinion that JS should not be used on a Citations: page to make a link to Talk:, and that JS should not be used on a Citations talk: page to redirect to Talk:. The reason is that not everyone has JS; the vast majority of users do, but many have it turned off by default. These users will see links to Citations talk: and will post to that page, and other users — including, I suspect, all or almost all regulars — will never see the comments. JS is being used (in this case) to make the site work differently for JS-enabled and JS-disabled browsers, in a way that does not degrade gracefully, and that's a Very Bad Thing.&mdash;msh210 &#x2120; 16:18, 3 April 2008 (UTC)
 * There is a less friendly message on Citations_talk: pages for non-javascript users - see Citations_talk:one or [ Citations_talk:one?action=edit]. Conrad.Irwin 10:47, 4 April 2008 (UTC)
 * Fair enough.&mdash;msh210 &#x2120; 06:05, 6 April 2008 (UTC)
 * We are looking at the possibility of making Citations_talk an alias of Talk which would push all editors of the former to the latter using only the MW software and not js. - 10:31, 4 April 2008 (UTC)

Namespace
I've just found the Namespace page (linked from the namespace entry) is about two years out of date, so could do with an update or two. The stuff about Wikisaurus could probably be better worded as well. Thryduulf 03:21, 29 March 2008 (UTC)

Bot tries to add deleted page.
When a bot tries to add a page that had been previously deleted, it gets an exception thrown up "raise EditConflict(u'Someone deleted the page.')" and stops. Is there anything I can do about this? SemperBlotto 11:52, 29 March 2008 (UTC)


 * Since you don't care about that warning (you know you want to create every page you tried regardless of previous deletion) you can just comment out this chunk (lines 1216-1220 for me) in wikipedia.py:

 elif '<label for=\'wpRecreate\'' in data:
     # Make sure your system clock is correct if this error occurs
     # without any reason!
     raise EditConflict(u'Someone deleted the page.')
 * and it should ignore previous deletion. - 16:30, 29 March 2008 (UTC)


 * I have followed your suggestion. It certainly ignores the deletion and attempts to load the article - I then get hundreds of lines of code and data flashing before my eyes and it carries on with the next article without actually loading anything. I might just have to live with it - it doesn't happen very often. SemperBlotto 17:31, 29 March 2008 (UTC)
 * Did you comment out the elif line as well as the raise line? I am not sure why it is spitting out the rest of the code...that is strange.  I don't really know Python but it sounds like it is treating the remaining chunk of the file as a string for some reason and printing it where it used to raise the error... -  17:56, 29 March 2008 (UTC)
 * Yes - both commented out. But no, the mass of "debugging text!" comes from somewhere else when it tries to load the page. SemperBlotto 17:59, 29 March 2008 (UTC)
 * Just commenting it out leaves a page that the following code doesn't fully understand (note the else clause on that same conditional does the output(data) call). And it still doesn't work: this condition is raised after the page save attempt. The problem is because pagefromfile uses page.exists to test if the page exists (it doesn't) and then tries to put it without doing a get first. It should just call get, and catch the no page condition (and "is redirect") and write the page. I re-wrote this for Keene, but I don't think he used it (and I haven't tested it): User:Keenebot2/code. So it might work ... Robert Ullmann 11:36, 30 March 2008 (UTC)


 * While I haven't tested that particular code, Tbot does the same sequence and it does work. Look at this edit, which I think is pretty cool for being entirely done by automation ;-) Robert Ullmann 16:52, 1 April 2008 (UTC)
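Robert's suggested fix — call get() and catch the "no page" condition rather than trusting page.exists() — can be outlined as below. The exception names are modeled on the old pywikipedia framework but stubbed locally so the sketch is self-contained; treat it as a sketch of the approach, not the tested bot code.

```python
class NoPage(Exception):
    """Stand-in for pywikipedia's 'page does not exist' error."""

class IsRedirectPage(Exception):
    """Stand-in for pywikipedia's 'page is a redirect' error."""

def save_if_new(page, text, comment='bot: creating entry'):
    """Write the page only if a fresh get() shows it is absent.
    `page` is any object with get()/put() in the pywikipedia style."""
    try:
        page.get()
    except NoPage:
        page.put(text, comment)
        return True
    except IsRedirectPage:
        return False  # policy choice here: leave redirects alone
    return False  # page already has content; leave it alone
```

Because the existence check and the write happen in one get/except sequence, a previously-deleted title no longer trips the "Someone deleted the page" conflict path.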

Another new page problem
There must be something amiss in the templates that allow users to create new entries. See this example (since deleted by Semper). I see this a lot, namely, situations where the entire definition is added as a red-link template. Can this be corrected? --EncycloPetey 17:45, 29 March 2008 (UTC)


 * I certainly see this all the time as well, and find it annoying. This particular example, by the way, would be OK in Latin as a form of the verb bacchor - to celebrate the rites of Bacchus. SemperBlotto 17:51, 29 March 2008 (UTC)


 * Well, I can't fix it just now, but here is where to look: Wiktionary talk:Project-Nogomatch for the documentation, MediaWiki:Noexactmatch for the implementation. - 17:53, 29 March 2008 (UTC)


 * I think this comes from the placement of on the definition line.  Users who are not familiar with MediaWiki aren't likely to recognize that as a template call, and some just assume that text needs to be surrounded by curly brackets.  Suggest we replace  with something like "Write the definition here."  -- Visviva 02:55, 30 March 2008 (UTC)


 * I tried replacing {substub} with (put definition here) but Connel immediately reverted it, saying the instructions are clear ... but of course no-one ever reads the instructions. (And they make little sense if you don't know wikitext anyway.) Users of course assume that the {}'s are part of syntax they shouldn't modify. Users almost always do this. Really could be made a lot better if it was designed to be used without any separate instructions. We could put some variant of at the bottom, to be removed by anyone who knows what it is. Robert Ullmann 11:45, 30 March 2008 (UTC)


 * That sounds like a good idea. Perhaps have a template that added a message along the lines of "This is a new entry that has not yet been reviewed by experienced Wiktionary contributors. It may require cleaning up to match Wiktionary's usual formatting standards." with an appropriate link to our formatting standards. Thryduulf 15:48, 30 March 2008 (UTC)

Language templates - time for a change
Hi all, with the new ISO for languages coming out at some point needing four letters per language, I feel it is time to move away from the system of reserving short template names for language codes. The most obvious solution seems to me to put the language names into Template:lang:___ and to use a wrapper template like to handle which languages should be linked to - though it would of course be possible to have a separate prefix for handling linked language names.

To this end I have filled in the gaps in Template:lang:ISO_639-1, using pywiki through my user account, and am happy to fill in all the others (though I would probably do it through a User:Conrad.Bot as there are quite a few). I feel we should, once this process is complete, start migrating away from the old system. I'm not sure how many editors this would affect directly - but it would certainly mean that most of our templates would need looking at. What are anyone's thoughts on this? Conrad.Irwin 23:01, 30 March 2008 (UTC)


 * I think this is a good idea, merging all the language codes into one syntax which is easy to form. We should probably come up with a complete list of templates which need to be revamped to handle this, especially the biggies. -  23:04, 30 March 2008 (UTC)


 * But we should probably keep reserved some (all?) of the codes for reasons of subst'ing templates, yes? Unless the French Wiktionary and others are also planning such a major shift. --EncycloPetey 23:40, 30 March 2008 (UTC)


 * If this is 639-6 we're talking about, it shouldn't be a problem AFAICS. At least, I assume we do not want to give uncontested dialects like North Midland the same privileged treatment we give to ISO 639-3 languages (?).  The system you describe sounds like an improvement, but I don't think there's a need to uproot the existing system in a hurry (or at all). -- Visviva 23:45, 30 March 2008 (UTC)

No. This is a misunderstanding of -4 and -5. (Note: neither, like -1, -2 or -3, is a reference to code length; -4 is guidelines, -5 is more 3-letter codes, and -6 is 4-letter codes for variations that we will not use, not being languages.) All of the 7000+ languages that we will want to treat as such will be coded with 3-letter codes if not already. This is a solved problem, and does not require any handling. As Conrad says, "wraps" it until we can finally, long overdue, rid ourselves of the explicit linking in the templates.

Under no circumstances whatever are we going to need 4 or 5 letter codes for languages we want to code; they will go into the 3-letter namespace, which has 17K+ codes, with most available. The "lang:" templates are a stopgap; they should go away. Robert Ullmann 23:53, 30 March 2008 (UTC)


 * Ok, that makes sense - but would it be better to have them all under lang: instead of just as short template names anyway? That way we need not worry about getting any language starting with rf_. What needs to be done to remove the links in the templates, presumably checking all the templates that we do use for places where the templates are used rawly instead of a call to ? Conrad.Irwin 11:00, 31 March 2008 (UTC)
 * Having them all under lang: would defeat one of the primary purposes of having them at all, which is use for subst'ing. Several wiktionaries make use of these same templates, especially the French Wiktionary which uses them for L2 language headers, for example.  The original purpose in having them was that they are consistent in content and familiar across multiple projects.  Users who come here and add just a few words may do so by using  or  in a Translations table or as a section header because that's the practice they know from the Wiktionary where they usually work.  We created these templates originally so that they can be regularly subst'ed by a bot.  If we change the template names, that parallel meaning across wiktionaries is lost and we can no longer subst by bot.  Instead we have to go looking by hand at individual uses and verify the intent before correcting them. --EncycloPetey 17:26, 31 March 2008 (UTC)

Template:en-adj
This template doesn't seem to support the case where one sense of an adjective has a comparative and superlative, and another sense is absolute. See pointed, where I believe the first sense has comparative and superlative forms, but the second sense does not. --EncycloPetey 03:24, 31 March 2008 (UTC)


 * Does it need to? The sense could just be tagged . -- Visviva 08:14, 2 April 2008 (UTC)


 * It ought to have the option so that it will parallel what we do for nouns. The  template allows coding to demonstrate that a plural form exists, but the noun has uncountable senses.  The adjective template ought to have the same functionality for the sanity of editors. --EncycloPetey 17:43, 3 April 2008 (UTC)