Wiktionary:Grease pit/2007/May

Looks of the English Wiktionary
I'm jealous of the French Wiktionary. Their little logos and neat translation tables, as in kiosque, make me jealous of their Wiktionary's looks, compared to the rustic English one. I have no intention of doing it myself (!!), but is it even possible to redesign such a big thing? 203.214.100.198 10:11, 1 May 2007 (UTC)


 * No. The means by which the French are able to do this is through extensive use of templates in section headers.  We don't use them here for many reasons, and we're not likely to because of the associated editing problems they would cause.  By the way, note that (1) the French language is overseen by a regulatory committee, and (2) their translations for kiosque cover only one of the stated definitions. --EncycloPetey 15:52, 1 May 2007 (UTC)

Please, tell me (again) why we don't use templates for section headers? Would seem like a good idea, so why not? Where is the discussion archive for and against? Is it time to revisit any such decision?--Richardb 06:13, 13 May 2007 (UTC)

How to add a comment after a Hebrew word
On fray there are some Hebrew translations listed that I'd like to reformat (for one, they aren't formatted as links); however, when I place the cursor after the word and type, the cursor jumps to the front of the first word. What is the magic trick to handling scripts that move from right to left? I'm sure there's a tutorial for this somewhere if only someone could point me in the right direction. __meco 19:55, 4 May 2007 (UTC)


 * It depends quite a bit on what kind of computer, operating system, and browser you are using. You can usually figure it out with some trial and error. If not, you can always copy and paste it into a text editor that doesn't suffer the same problems. Good luck! &mdash; Hippietrail 20:35, 4 May 2007 (UTC)


 * The simplest way is to copy each word in there and place them one by one, adding the romanization after each word. That way, the roman script addition stands between the two Hebrew words and makes the comma realign to LTR format.  --Dijan 20:42, 4 May 2007 (UTC)


 * A standard trick is to type in the line w/o the RTL words, then copy and paste each into the correct place. In this particular case, put the cursor after the character following the word (so you are in LTR territory), backspace del that char, then add what you want and that character back again. If you are using Firefox, you can also look at the bidi.* editing preferences. Robert Ullmann 13:38, 9 May 2007 (UTC)


 * Oh, just to be snarky: it isn't "jump[ing] to the front of the first word", it is moving to the left end of the Hebrew: to the end of the last word. ;-) :-) Robert Ullmann 13:41, 9 May 2007 (UTC)
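For the curious, the "jumping" follows from the Unicode bidirectional algorithm: a trailing neutral character such as a comma is pulled into the right-to-left run and rendered at its left edge. A minimal Python sketch (the helper name is made up for illustration) shows the standard workaround of inserting an invisible LEFT-TO-RIGHT MARK after the RTL word:

```python
LRM = "\u200e"  # LEFT-TO-RIGHT MARK: invisible, but strongly left-to-right

def ltr_anchor(rtl_word, punct=","):
    """Append punctuation to an RTL word so it renders in LTR position."""
    return rtl_word + LRM + punct

# Hebrew word, then LRM, then the comma, then the romanization.
line = ltr_anchor("שלום") + " shalom"
```

The mark has no glyph, so the saved wikitext looks unchanged; renderers then treat the comma as part of the surrounding left-to-right text.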

m&
Why does m& always show up as a red link? bd2412 T 21:50, 8 May 2007 (UTC)

You've got an HTML entity in the link -- the unicode ḿ works for me. Cynewulf 22:02, 8 May 2007 (UTC)
 * The link works for me, but it shows up as a box on my screen. bd2412 T 03:10, 9 May 2007 (UTC)


 * Looks fine in my versions of Firefox, as well as IE 7. --Connel MacKenzie 13:27, 9 May 2007 (UTC)


 * Using an HTML entity in a link is fine. The problem is that "m" + 0301 is a two character sequence that is not the same page title as ḿ, until you try to actually go there, and then the WM s/w converts to the composed character. (This is the same reason that CJKV compatibility-block links look red, but if you go there you get the proper page. They still show red in edit mode, though.) WM bug: it ought to be applying the same unicode composition and unification rules when looking up links. (Just to make that even slower ...) Robert Ullmann 13:58, 9 May 2007 (UTC)


 * Actually, it is a known "feature", not a bug. :-)  That is why we never use html within a wikilink - it is never "fine."  (And isn't it merely the local Javascript + [Go] button magic that functions as the work-around?  That isn't part of the base WM software.)  --Connel MacKenzie 04:19, 10 May 2007 (UTC)


 * I have heard it said that the difference between a bug and a feature is that a feature has documentation. --EncycloPetey 04:22, 10 May 2007 (UTC)
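Robert's composition point is easy to verify with Python's unicodedata module (a sketch of the Unicode NFC rule in question, not of MediaWiki's own code):

```python
import unicodedata

decomposed = "m\u0301"   # "m" + COMBINING ACUTE ACCENT: two code points
composed = unicodedata.normalize("NFC", decomposed)

# The strings compare unequal, so as raw link targets they name different
# pages, even though both render as the single character ḿ (U+1E3F).
assert decomposed != composed
assert composed == "\u1e3f" and len(composed) == 1
```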

railroading template
As railroading is a US-specific term, I think it would be better to move to  or  - the first matches the culturally neutral term used at en.wikipedia, the second matches the name of the category the template places articles in here.

As rail transport terminology is possibly where British English and American English are most different, would it be (a) desirable and (b) possible (via a parameter?) to classify words as being British or American? Thryduulf 14:06, 9 May 2007 (UTC)


 * It's just the name of the template. What the template displays could easily be adjusted, but there is no reason to propagate new templates.


 * You can mark a term as or .  If you need to mark that and that it is a rail specific term, you could use:  .  The  template allows you to combine in-line templates at the head of a definition. --EncycloPetey 15:49, 9 May 2007 (UTC)
 * I support changing the name of the template because "railroading" has come to have a completely different meaning in the U.S. (as in 'my boss is railroading me out of my job' or 'they're railroading that bill through Congress'). bd2412 T 16:21, 9 May 2007 (UTC)


 * I support changing it to and adjusting the displayed text as above.  :-)  Oh wait, is that a language code?  --Connel MacKenzie 04:22, 10 May 2007 (UTC)


 * No, not one of the defined 2-letter ISO language codes. It might be a country code, I suppose. --EncycloPetey 04:25, 10 May 2007 (UTC)
 * Oh, then just change it to . bd2412 T 05:09, 10 May 2007 (UTC)
 * Agreed. --Connel MacKenzie 18:15, 14 May 2007 (UTC)

Following this discussion I've moved the template to with a redirect from, and changed the wording to "  ". I've left a redirect in place from; depending on what is common practice at Wiktionary, someone may wish to change those entries that still use the original. Thryduulf 19:35, 14 May 2007 (UTC)


 * We have three new bot operators this week; I'll leave these template changes (using template.py? Nah, using replace.py) to them, as learning exercises.  --Connel MacKenzie 17:53, 17 May 2007 (UTC)

Korean entry category index
As I just posted at Template talk:ko-pos, does not allow a jamo-indexed category, so using it categorizes the entry under the full hangeul character. Should there be a jamo index parameter on that template? Rod (A. Smith) 04:48, 10 May 2007 (UTC)
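For reference, the initial jamo of a precomposed hangul syllable can be computed directly from the Unicode decomposition formula; a hypothetical helper (not an actual {{ko-pos}} parameter) might look like:

```python
# Initial consonants (choseong) in Unicode order.
CHOSEONG = ["ㄱ", "ㄲ", "ㄴ", "ㄷ", "ㄸ", "ㄹ", "ㅁ", "ㅂ", "ㅃ", "ㅅ",
            "ㅆ", "ㅇ", "ㅈ", "ㅉ", "ㅊ", "ㅋ", "ㅌ", "ㅍ", "ㅎ"]

def initial_jamo(syllable):
    """Return the leading jamo of a precomposed hangul syllable."""
    code = ord(syllable) - 0xAC00        # syllables block starts at U+AC00
    return CHOSEONG[code // (21 * 28)]   # 21 vowels x 28 finals per initial
```

For example, initial_jamo("한") gives "ㅎ", which is the sort key a jamo-indexed category would want rather than the full syllable.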

extra characters
Hi gang. Small request, could someone who knows how please add ġ,ċ,ǣ to the Latin/Roman character set - you know, the one that appears at the bottom of the page while editing. Ta, Widsith 13:51, 10 May 2007 (UTC)


 * I'd edit MediaWiki:Edittools, but I don't know what order your want them added. --Connel MacKenzie 14:58, 10 May 2007 (UTC)

Ah, so that's where it is! OK, let me have a go myself... Widsith 15:18, 10 May 2007 (UTC)
 * It looks to me as though ġ,ċ are both listed under Maltese and Old English already. Are they used in other languages? --EncycloPetey 15:50, 10 May 2007 (UTC)

Oh my god...I'm blind. Goodnight. Widsith 15:51, 10 May 2007 (UTC)

Krung thep mahanakhon amon rattanakosin mahinthara ayuthaya mahadilok phop noppharat ratchathani burirom udomratchaniwet mahasathan amon piman awatan sathit sakkathattiya witsanukam prasit
This could just be a problem of my browser, or I may have coded it incorrectly, but the Thai translation (mentioned in the etymology as well as in the translations section) of Bangkok's official name won't wikilink; it just sits there with the brackets showing, refusing to turn blue or red. I'd assume it to be an issue of the length of the term, but the English links work. &mdash; Beobach972 18:34, 11 May 2007 (UTC)


 * The length of the romanized title is 188 characters; the length of the Thai version is 417 characters? Assuming each character is url-encoded to 9 ASCII characters, that would be a URL (just a portion of the URL itself, mind you) of 3,753 bytes.  Offhand, I'm not sure the HTTP protocol allows such a thing.  --Connel MacKenzie 18:16, 12 May 2007 (UTC)


 * Oh, wow, a three thousand byte link. You know, that might explain why the link doesn't work... &mdash; Beobach972 03:30, 13 May 2007 (UTC)


 * I suppose one solution for that would be to have the entry at a shorter title. I know that we have quite the tradition of any word or phrase being at and only at its exact spelling, but perhaps this is a situation where we would have to make an exception.  I imagine we've all seen Wikipedia articles where it starts as "This article should be foob, but is here at Foob because of naming conventions" or something like that (apparently they've changed their policy, as iPod is now at.....well it's at iPod).  I don't know if anyone has any great ideas as to where we would put such an entry, or if people think that's a hideous idea.  But, it's something the community can chew on.  An entry does us little good if it can't be linked to.  Atelaes 05:48, 13 May 2007 (UTC)


 * BTW, I was off by a factor of three in my earlier calculation. Still, over 1KB for the "word" seems pretty unmanageable.  It might be worthwhile to determine what the precise maximum is, then back off enough to allow "..." at the end.  (Three periods, not the unicode ellipsis, right?)  --Connel MacKenzie 06:02, 13 May 2007 (UTC)


 * The limit seems to be an 800 character URL: กรุงเทพมหานคร อมรรัตนโกสินทร์ มหินทรายุธยามหาดิลกภพ นพรัตน์ราชธานี บุรีรมย์อุดมราชนิเว... :-)  --Connel MacKenzie 06:31, 13 May 2007 (UTC)


 * First of all, because you can see the brackets, this isn't a problem with HTML, it's a preset limit in the wiki software. The question is why the software would have a limit on the title of a page. Connel suggests, essentially, that it has to do with a limitation of the web. However, I don't think there's a limit to the size of an HTTP request, and although a server may set one artificially, it's usually around 128KB by default. More likely it has to do with a limitation in the database, and even then, a purely arbitrary one. DAVilla 15:21, 13 May 2007 (UTC)


 * I see that RFC 2616 section 3.2.1 recommends avoiding serving lengths over 255 characters. It says "unbounded length" if (and only if) the urls it generates would result in unbounded lengths.  So the RFC says in essence, that you have to limit what you are doing on the server side, and not serve up pages you can't handle.  I'm sure the developers weighed their choice carefully, as each additional character takes up an extra byte in each thread of each caching server, apache and database server.  So, if 800 characters is exceeded on WMF servers, it is supposed to return an http 414 (Request-URI Too Long) - which seems like it is generously more than 255, yet still a reasonable enough limit for squid and apache server performance to persist under a load of 33,000 page requests per second.  A side benefit of their choice is that silly section titles (like this one) are allowed.  --Connel MacKenzie 10:24, 14 May 2007 (UTC)


 * Also please note that that section of RFC 2616 is talking about the URI (the part you see in the "Location" or "Address" bar of your browser.) The complete request (with all your cookies, authentication, POST data, etc.,) used to be limited to 1 MB until the vandals started with really big pages of links - so I don't know what it was lowered to, now.  I haven't hit the limit in a while.  --Connel MacKenzie 10:36, 14 May 2007 (UTC) (edited) 10:39, 14 May 2007 (UTC)


 * Okay then, corrected to a limitation in the you-are-eye, and maybe not as arbitrary, but still on the wiki server side of the equation. DAVilla 21:48, 14 May 2007 (UTC)
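The encoding arithmetic in this thread is easy to reproduce (a sketch; the 800-character cutoff itself is server-side configuration, not a property of the encoding):

```python
from urllib.parse import quote

# A Thai character is 3 bytes in UTF-8, so each becomes 9 characters
# once percent-encoded for a URL.
assert quote("ก") == "%E0%B8%81"

def encoded_length(title):
    """Length of the percent-encoded form of a page title."""
    return len(quote(title))
```

A title of a few hundred Thai characters therefore encodes to several kilobytes of URL, well past any 800-character limit.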

Special:Uncategorizedpages cleanup
In cleaning up Special:Uncategorizedpages, I have come to notice that a sizeable number of the pages I have cleaned so far have been English proper nouns. Therefore, the thought occurs to me : can somebody design a robot to automatically replace the following wikitext

==English==
===Proper noun===
'''x'''

with accommodations made also for

==English==
===Proper noun===
'''x'''

and for

==English==
===Proper noun===
'''x'''

with the code

==English== ===Proper noun===

(the ‘|sg=x’ portion of the code being offered in case the bot cannot simply do away with ‘ x ’ and replace it with ‘ ’, which would probably be preferable)

? &mdash; Beobach972 19:13, 11 May 2007 (UTC)


 * Hmmm. Finding which ones need it is the (almost) tricky part.  From that list, I assume it would be something like:


 *  $ python replace.py -file:listofentries.txt -regex "==English==(.*?)===Proper noun===\r\n\'\'\'(.*?)\'\'\'\r\n" "==English==\n\n===Proper noun===\n\n\n" 


 * or something close to it. (My regex is poor.)  Is that what you are thinking?  (Wow, someone actually looks at Special:Uncategorizedpages now?!  Wow, wow!)  --Connel MacKenzie 19:25, 11 May 2007 (UTC)
 * P.S. There should be a blank line after "==English==" and after " ".  --Connel MacKenzie 19:25, 11 May 2007 (UTC)
 * ‘nowiki’ tags inserted around &mdash; Beobach972 03:36, 12 May 2007 (UTC)


 * Thanks for the help! That sounds like it would work. I badly need to brush up on my python skills (I posted in the hope some robot operator – AF? – might just incorporate the task), but I'll play around with this and see what I can do. &mdash; Beobach972 03:36, 12 May 2007 (UTC)
 * And yes, I'm determined to clear out that monstrous list. On fr.wikt it's a useful tool, but here it's just... monstrously large. &mdash; Beobach972 03:36, 12 May 2007 (UTC)


 * I think there is an enormous difference between using the pywikipediabot framework, vs. coding new python modules...the former just about anyone can do, easily.
 * Obviously, on en.wikt:, we haven't worried about categorization of all entries (at all.) I don't think it is common practice to even hope for a category, for each entry here.  But starting that practice should be a Good Thing, overall.  It might be worth discussing in WT:BP.  At any rate, good luck with it.  I'm not seeing an easy way (yet) of generating a list of them.  Perhaps a starting list would be all entries that contain "===Proper noun===" but don't have  "{{" or "Connel MacKenzie 16:26, 12 May 2007 (UTC)


 * Oh drat, I forgot what a pain it is to deal with line breaks. Anyhow, I count 15,197 proper noun entries that seem to need this correction (but I didn't filter for just English.)  I'm not seeing an easy way to do this.  --Connel MacKenzie 18:05, 12 May 2007 (UTC)


 * Alright, first note : I've started setting up the pywikipediabot framework and all of the requisite stuff, as of yesterday (strange though it may seem, I've never had occasion to use it before &mdash; that's partly what I meant above about my coding skills).
 * On which note: can one of you robot operators pass along to me the code that makes the robot stop making edits if anybody posts a message on its talk page?
 * Second note, as for how to acquire the pagenames : couldn't I just tell the bot to edit every page listed in Special:Uncategorizedpages (or make a list by copying all those pagenames) that contained the specified code? I'd tell it to look for ‘ ==English== ===Proper noun=== x ’, and if it found only ‘ ==German== ===Noun=== x ’ on the page, and no English proper nouns, it would just skip that page, correct? &mdash; Beobach972 00:08, 14 May 2007 (UTC)


 * Um, right, that was the part that I was saying is a pain. I can never seem to remember if a line break (in the pywikipediabot context) is "\r\n" or "\n" or "\r" or something else.  Regex wildcards sort of work, but have a tendency to go too wild on the wildcard.  Anyhow, yes, -ref:Special:Uncategorizedpages is fine as a starting point...as long as you don't say "Yes to all."  Note corrections above.  --Connel MacKenzie 18:12, 14 May 2007 (UTC)


 * Alright, noted. I'm still working on it. Putting in the exception (if that's what you were saying was hard) seems straightforward, which is probably a bad sign and means it'll be ridiculously complicated (haha). I suppose there is no rush, though, since I seem to be in the minority of wiktionary users who even look at that page to begin with. &mdash; Beobach972 05:05, 17 May 2007 (UTC)
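If it helps, the substitution under discussion can be written with Python's re module directly. This is only a sketch: the emitted template name {{en-proper noun|sg=...}} is an assumption about the intended target, and \r?\n sidesteps the line-ending question by matching either convention.

```python
import re

# NB: with DOTALL the inner wildcard can overreach on multi-language
# pages - the "too wild on the wildcard" problem noted in the thread.
pattern = re.compile(
    r"(==English==.*?===Proper noun===\r?\n)'''(.*?)'''",
    re.DOTALL,
)

def fix_proper_noun(wikitext):
    # \1 keeps the headers intact; \2 is the bare bolded headword.
    return pattern.sub(r"\1{{en-proper noun|sg=\2}}", wikitext)
```

The same pattern and replacement strings could be handed to replace.py with -regex.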

Link to vi:wikt
Notice the link to the Vietnamese page on the side of this (up above, where the interwiki links usually are)? Yea, well, as far as I can see, it's completely erroneous &mdash; and it's inserted by the inclusion, at the very top of the page, of Discussion rooms. I'd comment it out there, but it occurs to me... do we actually want that to appear on all of the pages? It could be useful, since not all FL wikts would have discussion pages that matched ours exactly, but they probably have a general discussion page to which we could link. &mdash; Beobach972 14:01, 14 May 2007 (UTC)


 * (shrug) I don't see the harm, either way. It is only four pages, right?  --Connel MacKenzie 17:50, 17 May 2007 (UTC)

POS template parameters
I've come across this problem a bit recently. The "sg=" and "pl=" parameters which are useful on as the only way I know of to use the template to link individual words of multi-word headwords, do not exist across all the POS templates. Some examples: limpieza de sangre (where doesn't take the parameter), ad referendum, and arma de fuego where adding  with the "pl=" parameter added extra brackets. I think that all nouns in all languages (?) should be able to take the parameters, and all adjectives and adverbs as well to have a similar functioning parameter. "Singular" and "plural" don't really make sense in all languages for adjectives and adverbs, but some parameter for wikifying individual words of compound headwords is still needed. This is a bigger problem than I can tackle because 1) looking at the code for makes my brain hurt, and 2) having poked around at other language POS templates, as well as the other English ones, it looks like very few have a mechanism for this. Time for some automated addition across all the templates? Dmcdevit·t 07:32, 19 May 2007 (UTC)
 * In what languages can adverbs be plural? --EncycloPetey 20:11, 20 May 2007 (UTC)
 * Good point. But my general point--that we need parameters to link multi-word phrases in all POS templates in all languages, plus consistent parameters like "sg" and "pl" where appropriate, and it's difficult for the less technically inclined to figure out how to do that on a case-by-case basis--still seems relevant. Dmcdevit·t 08:08, 24 May 2007 (UTC)
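The linking behaviour being asked for is tiny in isolation; a hypothetical Python sketch (purely illustrative, not template code) of what a headword-line template could do by default with a multi-word headword:

```python
def wikify_headword(headword):
    """Link each word of a multi-word headword individually."""
    return " ".join("[[" + word + "]]" for word in headword.split())
```

wikify_headword("limpieza de sangre") yields "[[limpieza]] [[de]] [[sangre]]", which is what the sg=/pl= overrides are currently used to spell out by hand.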

Template "gerund of"
with "lang=Italian" does not add the word into the category Italian gerunds. This seems to be at odds with "past participle of" and "present participle of". SemperBlotto 08:34, 19 May 2007 (UTC)


 * Compare . I think I've fixed it now.  --Connel MacKenzie 15:19, 19 May 2007 (UTC)
 * Thanks. That's fine. Category:Italian gerunds still looks empty, but I believe it will catch up some time. SemperBlotto 16:29, 19 May 2007 (UTC)

Section edit errors - involuntary archiving
Some edits on WT:BP today of sections, ended up overwriting previous sections when saved.

I've archived off a large portion of it, with the hopes that it is simply a new feature of WM that involuntarily (and randomly) starts archiving sections if a page is over 600KB.

I'll rebuild the index portion of WT:BPA when I find the program I used to use for that. Hopefully not too much has changed and I won't have to rewrite much of it.

Any progress on someone contacting Werdna's replacement? --Connel MacKenzie 15:13, 19 May 2007 (UTC)


 * I'm seeing what could be a similar problem while editing this page. Above, you'll see duplicate sections.  I even tried to delete one of the duplicate sections, but to no avail -- it's still there. --EncycloPetey 20:15, 20 May 2007 (UTC)


Past tense template
Would someone like to create a template for words that are in the past tense but not past participles, please? Then add it to forswore. SemperBlotto 19:46, 21 May 2007 (UTC)
 * Already exists: --EncycloPetey 21:39, 21 May 2007 (UTC)


 * I've just added that redirect from to .  --Connel MacKenzie 19:21, 24 May 2007 (UTC)


 * How would one go about adding to the Templates that show at the bottom of every edit screen? &mdash; Beobach972 03:35, 31 May 2007 (UTC)


 * Edit the first section of MediaWiki:Edittools. Be Bold.  --Connel MacKenzie 07:14, 1 June 2007 (UTC)

Problem with Search 1
I did a search for pants in the Wikisaurus namespace only. The result I got showed 5 entries. But at the top of the page it said Results 1-100 of 299, and also showed there were 3 pages of results 1 2 3 Next » . But if you click on 1, 2 or 3 or >>, you don't get anything meaningful.

Anyone know how to get this problem fixed ?--Richardb 06:09, 24 May 2007 (UTC)


 * There is a new experimental search engine at http://ls2.wikimedia.org/ currently in testing. The "1, 2 or 3 >>" problem is that the Lucene search compiles all search results, then filters what it displays based on which namespaces you select.  Of the 299 pages that contain "pants", five of them were in the Wikisaurus namespace.  --Connel MacKenzie 18:54, 24 May 2007 (UTC)


 * The search on all the Wikimedia projects is lousy. You get numerous false positives, and I often get "3 pages" of results, but only two of those pages exist.  It's frustrating, so I usually use the Advanced Search function on Google instead, if I really want to find what I'm looking for. --EncycloPetey 19:07, 24 May 2007 (UTC)

Problem with Search 2
I did a search for pants in the Wikisaurus namespace only. The result I got showed 5 entries, of which one was your favourite entry and mine, Wikisaurus:sexual intercourse. But when I went to that page and searched it for "pants", I didn't find the word. So why does that page show up in the search results? "pants" wasn't in the Wiki code for the page either.

Makes it seem like search is not very reliable. Anyone know how to get this problem fixed ?--Richardb 06:09, 24 May 2007 (UTC)


 * See above. This is a similar problem, where "sometimes" (conditions unknown) the Lucene search lags behind the DB masters.  A null edit to the page should cure it.  --Connel MacKenzie 18:55, 24 May 2007 (UTC)
 * Well, actually, it links to the "/more" page, which does contain the word "pants." --Connel MacKenzie 19:18, 24 May 2007 (UTC)

Special:Wantedcategories
Any ideas how to clear these out? There are lots that have only one entry in the category - those should just have the respective entries corrected, right? --Connel MacKenzie 18:50, 24 May 2007 (UTC)
 * Not if they're calling a standard category for a language that simply doesn't yet have that category, but a quick look shows a significant fraction are calling categories that exist under a different name. Do we know how many of the rest are the result of Transwiki articles? --EncycloPetey 19:04, 24 May 2007 (UTC)


 * I have no idea. I'm asking what people think a reasonable starting point is, for working this list.  Remove all categories from the Transwiki: namespace?  Maybe.  --Connel MacKenzie 19:20, 24 May 2007 (UTC)

Perhaps useful would be to pair up wanted categories with extant categories (category:Sports terminology & category:Sports seem an obvious example) and get a bot to periodically check the special:wantedcategories and move articles according to the list. The same could be done with categories that obviously don't belong on Wiktionary, e.g. category:Articles for deletion. Thryduulf 18:02, 30 May 2007 (UTC)

There doesn't seem to be too much noise in the list; the Min Nan POS cats need to be created. A lot of the topic cats also simply need to be created. Note that AF is categorizing a lot of entries as it converts tags. (matrix wasn't in Archeology, happo wasn't in any cat, etc, etc.) Robert Ullmann 10:32, 31 May 2007 (UTC)
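Thryduulf's pairing idea could start as nothing more than a hand-maintained map applied to each page's wikitext; the mapping entries below are illustrative examples, not an agreed list:

```python
# Wanted category -> extant category (examples only).
CATEGORY_MAP = {
    "Sports terminology": "Sports",
}

def remap_categories(wikitext):
    """Rewrite wanted-category links to their extant equivalents."""
    for wanted, extant in CATEGORY_MAP.items():
        wikitext = wikitext.replace("[[Category:%s]]" % wanted,
                                    "[[Category:%s]]" % extant)
    return wikitext
```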

Automate 'pedia links?
Can we have a bot add to our entries that in fact have corresponding Wikipedia pages (and perhaps even add  where applicable)? bd2412 T 01:26, 31 May 2007 (UTC)


 * How will the bot tell whether it's an entry, redirect, or disambiguation page? I'm not saying this is a bad idea; I'm just wondering about the specifics of execution.
 * Example 1: We have an article on the word accord (concept) and an entry nominated for deletion for Accord; which one (or both) would end up linking to a wikipedia article about Accord. There might be a lexical connection between the two, but there might not. In some cases like this, our article will not match Wikipedia content at all because of CFI and the simple fact that we are a dictionary and they are an encyclopedia.
 * Example 2: We have an article on heather (plant) and one on Heather (feminine name), but Wikipedia has only a redirect to Calluna (about the plant). So how would the link be set by automation?
 * --EncycloPetey 01:42, 31 May 2007 (UTC)
 * How about this: we tell the bot to look for disambiguation pages first, so heather would get a link to  , which provides options including the plant. I'm sure we can also tell the bot to ignore redirects. <i style="background:lightgreen">bd2412</i> T 01:59, 31 May 2007 (UTC)
 * Uh... I thought we weren't using anymore.  In any case, that doesn't solve the problem of potentially having articles linked whose only commonality is shared spelling.  I can't think of a good example off the top of my head, but say (hypothetically) Wiktionary has an entry on the archaic nautical verb flubel, and Wikipedia's only article on the subject is about a Swiss manufacturer of chocolate clocks.  How will having the link help? --EncycloPetey 02:15, 31 May 2007 (UTC)
 * We have to use wikipediapar in some instances (look at what links to it and you'll see some obvious cases). But I think that the number of right hits will so outweigh the 'false positives' as to make it worthwhile. An editor can later sort through the bot's edit history and look for probable false leads. If we restrict the bot to, say, five-hundred edits a day, it should be fairly quick to peer through the obvious questionables. I'd be keen to know how much overlap there is now between Wikipedia titles and Wiktionary titles. <i style="background:lightgreen">bd2412</i> T 02:26, 31 May 2007 (UTC)
 * Using is always wrong.  Use  instead.  --Connel MacKenzie 03:07, 31 May 2007 (UTC)


 * The function of {wikipediapar} was added to {wikipedia} quite a while ago; it is now just a redirect. AF routinely replaces it, because people continue to use it. (no big deal) Robert Ullmann 04:30, 31 May 2007 (UTC)


 * I dislike our current "Wikipedia" link as it is. (Messes up formatting, moves around depending on whether or not there is a TOC, is inconsistent in what it links to - disambig, redirects or content - pushes images around, moves [Edit] links randomly, takes up more than three lines again because someone screwed up the image on commons or something, doesn't wrap long titles cleanly, doesn't allow for multiple links cleanly, etc.)  There probably should be something that allows a person to look it up in the Encyclopedia as well...if the WikiMedia software has an efficient method of crossing projects, that would be even better, no?  But no matter what, we should be moving away from the DIV box thing, and listing the textual link (wrapped in a CSS class?) more like  does.
 * All that said, I think this bot proposal has a great deal of merit. (If, on the other hand, you are just curious about "title" matches, the XML dumps do have a separate file with just the article titles.)  With en.wiktionary content still under 500MB, you can only pretty easily filter out entries that already have those links, and provide a complete list of entries you'd like bot-edited, for community review.  But I see no reason to exceed the 100-at-a-time guideline during the discussion phase.  --Connel MacKenzie 03:03, 31 May 2007 (UTC)
 * The link is what it is. As for matches, I've started doing -ism's by hand (as it were) - those fairly universally match up well (as would, I think, the -ology's). No problem with a 100-at-a-time limit. <i style="background:lightgreen">bd2412</i> T 03:35, 31 May 2007 (UTC)
 * Don't forget that there's also a for use in a See also/External links section. So, we already have a way to cross-link that isn't visually intrusive.  I do like the floating box, though, even if it does have formatting issues to be resolved. It can be very useful to know right away that there may be a longer, more detailed article on Wikipedia. --EncycloPetey 21:43, 31 May 2007 (UTC)


 * I would suggest starting by downloading the XML for the en.wikt (48.1 MB compressed), the XML for the 'pedia article titles only, and generating a list of candidates, rather than trying to add the templates. Then we can look at the list. I would think almost all would require some kind of evaluation. But we can look first and see. Robert Ullmann 04:30, 31 May 2007 (UTC)
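The candidate-list step Robert describes reduces to a set intersection once the two title lists are extracted from the dumps (a sketch; the dump parsing and file handling are omitted):

```python
def candidate_titles(wikt_titles, pedia_titles):
    """Titles present in both projects, sorted for community review."""
    return sorted(set(wikt_titles) & set(pedia_titles))
```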


 * Oh, a good list of candidates are the 'pedia entries that link to wiktionary ;-) Robert Ullmann 04:37, 31 May 2007 (UTC)
 * Hmm, there are few enough of those that I can do them by hand. Thanks! <i style="background:lightgreen">bd2412</i> T 20:05, 31 May 2007 (UTC)


 * I see a vote is being drafted. I'll note that the current draft does not include my preferred option, which covers more difficulties than any of the listed proposals do.  For one thing, I have started seeing pages that link to the Arabic Wikipedia, the Russian wikipedia, the Japanese wikipedia, etc.  I'm not sure how we can or should handle that, but as far as linking to the English Wikipedia, here is what I would want:  "Each Wiktionary entry should have no more than one Wikipedia link in the form of a box.  It should link to the most generally relevant page on Wikipedia, even if that is a disambiguation page.  All other links should be in-line."  I would prefer to see the in-line links relegated to the bottom of the entry, rather than mixed with the definitions, but that's likely a separate issue.  --EncycloPetey 20:22, 28 June 2007 (UTC)