Benutzer Diskussion:Stefan Kühn/Check Wikipedia/Archiv/2009/April
Re-scan the French dump ?
Hi Stefan, as you know, less than half of the articles of the French Wikipedia dump were scanned last time (actually the last time it was fully analyzed was in October 2008) since the script hit the limit of 40.000 detections. Our community is quite annoyed that many high-priority errors remain that could not be detected at that time. My questions:
- Now that you improved the daily processing of the script, do you plan to increase the max detection limit of the full dumps (larger than 40.000), so that they would be more fully analyzed and a more complete list of erroneous articles would be available for the daily scans ?
- If you do, is there any way to force another analysis of the last full dump, instead of waiting for the next one to be produced ?
Sorry to bother you again, your project is so useful to us. Thanks -- Laddo 66.131.214.76 04:56, 2. Apr. 2009 (CEST)
- Hallo Laddo, my script now has three modes of operation ("dump", "live" and "only"). "dump" means: scan the newest dump. "live" means: a scan like the script does daily, and "only" scans the dump only for error no. X. Every day my script checks the dump page, and if there is a newer dump it starts "dump" and then "live", otherwise only "live". So scanning the full dump daily is no problem, it is only a question of time. If you say "Yes, we want a new dump scan", then I can do this. If there is no reason or no user request for a new dump scan, I wait for the next dump. So tomorrow my script will scan the last frwiki dump. No problem. :-) -- sk 09:51, 2. Apr. 2009 (CEST)
- Fantastic, thanks. -- Laddo 199.22.57.2 18:37, 2. Apr. 2009 (CEST)
Hmm...
Hi Stefan, first of all I am pleased to tell you that pdcwiki has taken up and implemented all of your improvement suggestions. :-)) Nevertheless, the script reported the following: "With the last scan the script checked 1284 articles. At the moment the script identified 0 ideas for improvement in 2 articles." Perhaps you can find the reason for this. Best regards and many thanks for your great work -- Marbot 21:26, 1. Apr. 2009 (CEST)
- PDC is my favourite wiki for testing. :-) Nice and small, and not so many errors. I have made massive changes to my script over the last few days, which is why these two strange residual errors now show up. Since I test with pdcwiki almost daily I had already noticed them, but I have not yet looked into the problem; it is on the to-do list, though. By the way, would you trust your bot to keep cleaning up in, for example, the German or English Wikipedia as well? Thanks for your help. -- sk 09:37, 2. Apr. 2009 (CEST)
- *happy*, thanks for the kind words. :-) However, I am not the operator of our clean-up bot. I will point Xqt to this discussion, though. Best regards -- Marbot 15:17, 3. Apr. 2009 (CEST)
- Hello Stefan, Marbot pointed me here. I am really sorry that my bot took away your test environment on pdc, now that everything is cleaned up ;) Yes, with de-wiki this will be difficult, since most of the items are regarded as cosmetic changes. There is/was opposition to working through them as a dedicated bot run; that was easier on pdc. But during other bot runs the bot can handle these fixes along the way, and that also applies to other wikis. I would be happy to work together more closely on this. Regards --- @xqt 18:58, 3. Apr. 2009 (CEST)
- You can find the code of the Perl script on the project page, as well as the lists of articles in which errors were found. If you come across errors that your bot cannot handle but that can occur internationally, those would be interesting new errors for this page here. -- sk 21:53, 3. Apr. 2009 (CEST)
desc_enwiki and head_enwiki
At en.wp, I started adding to the descriptions which errors can be fixed by AWB. At first I thought the delay was a feature, but it looks like only head_enwiki is used, not desc_enwiki. In the meantime, I added (AWB) to the headers. -- User:Docu
- Oops, it is possible that my script only creates the translation page but does not load it. I will check this at the weekend. -- sk 09:52, 3. Apr. 2009 (CEST)
- About those variables, my understanding is that the error_nnn_xxxx_script variables behave like defaults when the language-specific version error_nnn_xxxx_XXwiki is not defined. It seems to me that having both sets of variables (_script and _XXwiki) in each Translate page prevents the script from using system-wide defaults and confuses the translation scheme. Would the script be able to read the default set of variables (the ..._script ones) from a central (shared) location and ignore their values on individual Translate pages, so that we could simply delete them from our Translate pages and retain only the language-specific set (the ..._XXwiki ones)? -- Laddo 199.22.61.2 14:03, 3. Apr. 2009 (CEST)
- @Docu: My script works as intended and Laddo is right: only ..._enwiki will be used for the translation, if it is not empty. -- sk 22:07, 3. Apr. 2009 (CEST)
- OK for me. Lots of pressure on English translation! ..._enwiki ones will be better maintained than plain default values (though they will likely contain special text only applicable to English Wikipedia, such as references to WP:AWB that Docu referred to, but that's minor). You will still need to store defaults somewhere in case some ..._enwiki are empty and for new detections. Let us know -- no rush ;) -- Laddo 66.131.214.76 23:50, 3. Apr. 2009 (CEST)
- It might not have worked as in some of the messages I only used "desc_" and not "head_". @Laddo: it takes time to translate this into AWB ;).
- If it helps, I could do the descriptions in a template and split between general description, MoS and wiki markup crossreferences, AWB and other bot suggestions .. -- User:Docu
Tuning variables for detections
Hi Stefan, me again, yet another idea for improvement. I don't know if it's possible, but it would be nice if we could "tune" some detections using variables from the Translate page. For example error 37 (Title with special letters and no DEFAULTSORT) could be set to check less or more characters (as you know it currently checks the first three for all languages). With all the accents in French, it would be better to initially check only the first character, to address more critical cases, but this would not be adequate for other languages. Possibly supporting some variable like error_037_????_frwiki, if feasible, could provide that extra flexibility. -- Laddo 66.131.214.76 00:13, 4. Apr. 2009 (CEST)
- It is no problem to change this check for fr, but it is very difficult to do it via the translation page. For the moment I will fix this by hand in the script. And when frwiki has fixed everything, we can change it to two letters and later back to three. -- sk 06:23, 4. Apr. 2009 (CEST)
Bad detection of error number 29
Hello,
on the French Wikipedia the program does not recognize </Gallery> (with an upper-case G) as a closing tag for <gallery>
An example of a false positive is fr:Žemaičių Kalvarija
Regards
--Hercule 00:24, 4. Apr. 2009 (CEST)
- Wikipedia works like XHTML, and there it is not allowed to write a tag with capital letters. Please fix this "G". -- sk 06:25, 4. Apr. 2009 (CEST)
New errors
Hello
I have two things:
- Redirects in navigational templates. For example, a template like en:Template:NorwegianPrimeMinisters probably contains no redirects - it is just an example - but there should be no links like [[redirect]]; they should be [[real name of article|redirect name]]. That is because when you are on "real name of article" and click the redirect in the template, you go via the redirect back to "real name of article". When the template is inserted with the right links, the user can't click the link to the article he is already on.
- Redirects of templates. Sometimes templates are moved, but the old name remains in articles. On pl.wiki we recently deleted stubs (for example, pl:Szablon:Samoa stub is a redirect to pl:Szablon:Stub, but "Samoa stub" is still in ~20 articles, for example in pl:Apia).
PMG 21:28, 2. Apr. 2009 (CEST)
- Hello PMG, regarding the first error: sorry, I don't understand this problem. Please describe it in more detail. :-) Regarding the second error: with my script I can't find deleted templates. The other problem is that I would need a list of deleted templates for each language. I think this is not possible. -- sk 17:54, 3. Apr. 2009 (CEST)
(Szablon = Template)
About the first: there is the article pl:USS Argonaut (APS-1), its companion pl:USS Argonaut (SS-166) - a redirect - and their template pl:Szablon:Okręty podwodne grupy V. (OK, here the story ends because I don't have the skills :P ). When somebody is on USS Argonaut (APS-1) and goes down to the template, he can follow many (6 blue) links. 5 are good - but when somebody clicks on "Argonaut", where does he go? To the same article. What should it be? In the template, instead of [[USS Argonaut (SS-166)|"Argonaut"]] there should be [[USS Argonaut (APS-1)|"Argonaut"]]. Then, when we are on USS Argonaut (APS-1), the link in the template will be black and not clickable.
That's why I need that list of redirects in templates.
About the second:
I don't know why you would need lists of deleted templates. In my opinion you should do something like this:
- in the template namespace, find redirects (for example pl:Szablon:Biografia stub is a redirect to pl:Szablon:Stub);
- check whether such a "redirect template" is used in some main-namespace article (for example pl:Eugene_O'Neill);
- list that main-namespace article.
PMG 18:39, 4. Apr. 2009 (CEST)
- Ok, I understand both problems. But I don't see a way to detect this with my script. The problem is that my script does not work with a database; it scans only one article at a time and searches inside that article for errors. My script sees only the text you see when you click "Edit". What you want needs two steps: 1.) a scan of all templates and 2.) a scan of all articles. This is not possible with my script. - Or, with your example: when my script scans the text of the article "USS Argonaut (APS-1)", it finds [[USS Argonaut (SS-166)|"Argonaut"]], but it does not know that "USS Argonaut (SS-166)" is a redirect. That is the problem, and it is the same problem with the second error. -- sk 18:58, 4. Apr. 2009 (CEST)
6000 issues solved??
Hi Stefan, according to the output on the toolserver, 6000 issues on nl.wikipedia.org are reported as solved. I think something is wrong... Rudolphous 05:52, 2. Apr. 2009 (CEST)
- Shit. Yesterday I changed some things on my test system. There I deactivated, for testing, all "old" errors and "daily changes" so that only new articles would be scanned. At the end of my changes I copied my updated script back to the production system and forgot to reactivate those procedures. Sorry, this is my fault, and it is a problem for all languages. I will fix it tonight. But you can set the page back to yesterday's version and fix all open errors. -- sk 09:28, 2. Apr. 2009 (CEST)
- Hi Stefan, no problem. Thanks for your quick answer. Rudolphous 17:47, 2. Apr. 2009 (CEST)
- I think you have to re-check all errors from April 1st, and probably all changes made between April 1st and 2nd. Today the script scanned all changes between the 2nd and 3rd, i.e. all new articles. On pl.wiki there were 6100 errors on the 1st, 210 errors on the 2nd and 840 today (over 5k gone in two days - that's impossible). Malarz pl 14:21, 3. Apr. 2009 (CEST)
- For some languages I started a dump scan, but I don't have the time to do this for all languages. For some languages I had a backup of the error list. -- sk 10:59, 5. Apr. 2009 (CEST)
Repeated articles in the same list of 50
Hi Stefan, I noticed that sometimes the same detection appears twice in a given list of 50 -- I guess this is for articles recently changed that were also part of the Big list from the previous day. -- Laddo 66.131.214.76 23:55, 3. Apr. 2009 (CEST)
- I know this problem. It affects all errors, and I hope I can fix it at the weekend. -- sk 06:14, 4. Apr. 2009 (CEST)
- Ok, now the script will not report an error twice. -- sk 10:58, 5. Apr. 2009 (CEST)
Links in the main namespace to other wikis
Hello
I don't know exactly how it looks on other wikis, but on pl.wiki there should be no links to another wiki in the main namespace. What exactly do I want? I want the script to show where an article in the main namespace contains something like [[:en:blablabla]] or [http://en.wikipedia.org/wiki/blablabla blablabla]. So far I have found two ways to search for this - one, but it returns many non-mainspace links, and a second, but that still doesn't fit.
For example, in pl:Mikrograwitacja there is a <ref> whose "source" is en.wiki - but as far as I know, linking to another wiki and claiming that the other wiki is a reliable source is wrong.
--PMG 18:53, 4. Apr. 2009 (CEST)
- I had the same idea, but not the time to program it. :-) I already have a list of all links in an article, so I can easily check for [[:en:blablabla]] or [http://en.wikipedia.org/wiki/blablabla blablabla]. I will try it. -- sk 19:17, 4. Apr. 2009 (CEST)
- I think we did this a couple of years ago in en.wp. In the meantime, this is likely to dig up a lot of stub and other article maintenance templates. -- User:Docu
- Ok, I have added error 68. The script now finds [[:en:blablabla]]. But at the moment I can't detect the other case. Maybe in the future. -- sk 20:34, 5. Apr. 2009 (CEST)
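A minimal Perl sketch of what such a detection might look like; the regular expressions, the sample text and the variable names are illustrative assumptions, not taken from the actual script:
 #!/usr/bin/perl
 use strict;
 use warnings;

 # Hypothetical sketch: scan article wikitext for links that point to another
 # language edition, either as [[:en:Some page]] or as a raw Wikipedia URL.
 my $text = 'See [[:en:Microgravity]] or [http://en.wikipedia.org/wiki/Microgravity microgravity].';

 # inline interwiki links such as [[:en:Foo]] or [[:en:Foo|label]]
 while ( $text =~ /\[\[:([a-z]{2,3}):([^\]|]+)(?:\|[^\]]*)?\]\]/g ) {
     print "interwiki link to $1: $2\n";
 }

 # external URLs that point at another Wikipedia
 while ( $text =~ /\[https?:\/\/([a-z]{2,3})\.wikipedia\.org\/wiki\/(\S+)[^\]]*\]/g ) {
     print "external link to $1.wikipedia.org: $2\n";
 }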
Dot after ref
Hello
I have a question. You report it as a possible error when a dot comes after a ref (for example "<ref> blablabla </ref>. "), because it is an error in English and probably other languages. But in Polish that is how it should be. So is it possible to generate, for the Polish wiki, a list of ".<ref> blablabla </ref> " errors? PMG 15:56, 5. Apr. 2009 (CEST)
- The same question came up in frwiki. I will add this! :-) -- sk 16:00, 5. Apr. 2009 (CEST)
- Ok, new error 67. -- sk 20:05, 5. Apr. 2009 (CEST)
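For illustration, a small Perl sketch of how both orderings could be detected; the regexes and the sample text are assumptions, not the script's real code:
 #!/usr/bin/perl
 use strict;
 use warnings;

 # report a dot or comma directly after </ref> (error 61), and the opposite
 # order - punctuation directly before <ref> - for wikis such as pl/fr (error 67)
 my $text = 'Some claim<ref>source</ref>. Another claim.<ref>source</ref>';

 print "punctuation after </ref> found\n" if $text =~ /<\/ref>\s*[.,]/i;
 print "punctuation before <ref> found\n" if $text =~ /[.,]\s*<ref[ >]/i;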
error 66
I think that a <small> tag in an image description is acceptable when used with the thumb option, e.g. pl:Bielsko-Biała#Przemysł (Zdjęcie z roku 2007 means "Picture taken in 2007"), but this tag in pictures without the thumb option is a bad idea. I don't know how it is on other wikis, but on plwiki one can see the difference. Could you divide this error into two parts? Then it would be possible to deactivate one part of the errors. Malarz pl 15:05, 6. Apr. 2009 (CEST)
- What do you mean by two parts? One where everything is small and a second where only part is small? IMHO, from a typography point of view it is bad practice to use two different font sizes in one image description. See for example Karl-Marx-Haus: the image description "Ausstellungsräume im Jahr 2003" means "Exhibition rooms in the year 2003". I took this image in 2003, before there was a big rebuild of this collection, so this information is important for this image. If it is not important enough, it can go on the image description page instead, but please not as small text. Another way is to use parentheses, like "Painting of Rembrandt (1655)". Please use as few style tags inside articles as possible. Motto: less is more! -- sk 21:44, 6. Apr. 2009 (CEST)
- 1st part: <small> with thumb or in a gallery.
- 2nd part: <small> in an image where the description is used for the "alt" parameter only. Malarz pl 22:14, 6. Apr. 2009 (CEST)
Link to other language
Hello
It's working. Well ... it's working too well :>. Is it possible to filter out [[:XX:File or [[:XX:Image? Because the search shows links to en.wiki images that are under fair use. For example, pl:Greatest Hits 1970-1978 has an "Okładka" (okładka = cover) that links to a file on en.wiki (because on pl.wiki there is no fair use). And I think this can happen on many wikis. So in my opinion it would be better not to show/gather information about "File" or "Image".
Second thing - is it possible to get a full scan of pl.wiki for some errors? For example "Category duplication", "Interwiki before last headline", "Category before last headline", "headline with additional bold", "headline starts with one =", "headline with : at the end". PMG 17:56, 6. Apr. 2009 (CEST)
- Hello PMG, thanks. I think the next logical step is to eliminate these fair-use images in plwiki. In the German Wikipedia we do not allow fair use, because we want free images. At the moment I will not filter this out; maybe if other languages also say they want it. But today I will first program other important things. :-) Tomorrow my script will check the last full dump. I hope this helps. Please send my best greetings to the Polish contributors. -- sk 21:27, 6. Apr. 2009 (CEST)
Picture without description
Inside a template this is often not an error. See for example no:Andorras herrelandslag i fotball, no:Riksvei 13 and many others. Regards, -- BjørnN 19:10, 6. Apr. 2009 (CEST)
- It's still an error. But minor. You can have better comprehension of an image by adding a proper alternative text ;) Loreleil 19:53, 6. Apr. 2009 (CEST)
BjørnN, In both examples you are NOT providing the "alt text" of the images, and this is bad. Please add the description like this:
|image = [[Image:File name.jpg|200px|Description.]]
There are different ways you can write a template; for example you can have something like this:
|image = File name.jpg|Description.
On it.wiki we also use a parameter to add the alt text. The template syntax can be very complex:
|{{#if:{{{image|}}}|[[Image:{{{file name}}}|200px|{{{description}}}]]<br />{{{description}}}}}
The result of the last example can be seen here: it:MiG-25 (source code it:Template:Aereo militare)
In other words: if you stay over the image with your mouse you should always be able to read the alt text.
Stefan, please make sure this last example can't produce false positives; I'm not sure about it. --Red Power 20:06, 6. Apr. 2009 (CEST)
- @Red Power: I can only detect images like "[[Image:File name.jpg|200px|Description]]", so I can't detect the image in MiG-25. -- sk 21:14, 6. Apr. 2009 (CEST)
- I see what you mean, thanks a lot. Regards, -- BjørnN 21:23, 6. Apr. 2009 (CEST)
Hellip
On the Dutch wikipedia the following 6 articles produce a Hellip error: http://nl.wikipedia.org/wiki/Wikipedia:Wikiproject/Check_Wikipedia#Code_055:_Hellip Question: What is wrong with them? Best regards, Rudolphous 21:05, 6. Apr. 2009 (CEST)
- That is correct, the error has changed. Please change the description. This error now detects double small tags; Hellip was merged into error 11. (See the News.) -- sk 21:15, 6. Apr. 2009 (CEST)
Coordinates
Feature request: could you build a check for whether the template Vorlage:Coordinate occurs more than once in an article? Or even for all templates in Kategorie:Vorlage mit Koordinate - that category is also well supplied with interwiki links. --AwOc 21:19, 6. Apr. 2009 (CEST)
- I don't quite understand. My script currently scans every article for all coordinate templates known to it, and uses them e.g. for these files. What exactly do you need this for? The easiest way is to check in this file whether an article appears more than once; there will certainly be a few, e.g. Pesterwitz. -- sk 21:32, 6. Apr. 2009 (CEST)
- Well, I looked at a handful of articles from 'coordinates missing', and several of them already contained coordinates, which then overlapped in an ugly way with the 'please help' notice, as in KZ_Neustadt, Alster and Schlossbrücke (Berlin-Charlottenburg) (where two 'coordinates missing' notices overlapped each other). That is not nice, and since you are scanning the articles anyway... It can surely be done via the text file as well, but that is not exactly user-friendly for now. --AwOc 21:56, 6. Apr. 2009 (CEST)
- Everything is still under construction, please bear with me. :-) Your idea is actually clever; I can build it in at the next opportunity. Then one no longer has to search for it by hand. -- sk 22:14, 6. Apr. 2009 (CEST)
- Of course, all the more so when I look at the version history of this page. I just noticed that it is not quite that simple, because coordinates can also appear in the running text, as in Baden-Württemberg - and that is of course sensible. --AwOc 22:49, 6. Apr. 2009 (CEST)
Title with special letters and no DEFAULTSORT
Right, the error Title with special letters and no DEFAULTSORT. On en-wiki we have Template:Lifetime which provides a DEFAULTSORT to any article that uses it. Therefore, an article with diacritics in the title and a lifetime template is correct. I've gone round and round trying to get the correct logic into AWB. I finally have. Would you update the CheckWikipedia logic please. Thanks Rjwilmsi 00:23, 7. Apr. 2009 (CEST)
- I will look at this in the next days. -- sk 08:18, 7. Apr. 2009 (CEST)
New detection ?
Possible new detection: when two references <ref>...</ref> follow each other. In languages like English or German, references appear like « [1] », but in French (and possibly other languages), they appear as « 1 ». There are two possible problems when there are two in a row:
a) The convention is not to have any space in-between: neither « [1] [2] » nor « 1 2 »;
b) In French, we additionally require a special template {{,}} in-between so that the two note numbers appear as « 1,2 » instead of « 12 ».
-- Laddo 66.131.214.76 21:58, 5. Apr. 2009 (CEST)
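A rough Perl sketch of the two checks being proposed; the patterns and the sample text are only assumptions for illustration:
 #!/usr/bin/perl
 use strict;
 use warnings;

 # a) whitespace between two consecutive references (all languages)
 # b) two references directly adjacent without the {{,}} separator (French rule)
 my $text = 'Claim.<ref>a</ref> <ref>b</ref> More.<ref>c</ref><ref>d</ref>';

 print "space between two references\n"      if $text =~ /<\/ref>\s+<ref[ >]/i;
 print "adjacent references without {{,}}\n" if $text =~ /<\/ref><ref[ >]/i;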
- In my opinion, fr.wiki is making a BIG mistake. People with disabilities can have problems pointing at and clicking those little numbers! Sometimes you can't use a mouse, but only special devices. Please ask your community to use the wiki standard; accessibility always has priority. Thanks. --Red Power 20:28, 6. Apr. 2009 (CEST)
- These are standard typographic rules of the language; Wikipedia just followed standards used for books. -- Laddo 66.131.214.76 05:22, 7. Apr. 2009 (CEST)
- Italian books use this1 kind of reference too; unfortunately Wikipedia is online, it's not a book! You can set your CSS for the "version imprimable" (Print.css) in order to adopt the French typographic rules only when someone prints the article on paper. Explain this to the others.
Regarding your point a) I agree, there should not be a white space between references, like[1] [2] this, or between the reference and the last word, like [1] this. But those are minor errors IMHO. --Red Power 16:58, 7. Apr. 2009 (CEST)
Problem with error 32, double pipe in one link
Hi! I was at a loss with this one at fiwiki, since I can't see the double pipe. Is this a bug or am I just blind? fi:Eelis Lyytikäinen - [[Jalkaväkirykmentti 55 (jatkosota)|Jalkaväkirykmentti 55:n]] 3. pataljoonaan, Thanks for your help, --Albval 23:06, 7. Apr. 2009 (CEST)
- The problem is in this section: [[Ryhmä Oinonen|Ryhmä Oinoseen. Myöhemmin vuonna 1941 hänet siirrettiin Viipurin huoltokeskuksen päälliköksi. Vuonna 1942 hänet siirrettiin intendentiksi V Armeijakunnan esikuntaan, josta hänet siirrettiin edelleen samana vuonna hänet siirrettiin toimistoupseeriksi Kotijoukkojen esikunnan yleiselle huolto-osastolle. Vielä vuonna 1942 hänet kerittiin siirtämään [[Jalkaväkirykmentti 55 (jatkosota)|Jalkaväkirykmentti 55:n]] - My script thinks this whole section is the link, and that it ends with "...55:n]]". -- sk 08:28, 8. Apr. 2009 (CEST)
- Ah, so I was blind. Thanks! --Albval 08:31, 8. Apr. 2009 (CEST)
Better output for detection 002
Hi Stefan, in detection 2 (faulty break syntax) your output often does not show the erroneous break -- since it often lies at the end of the paragraph, but only its beginning appears in the report. If possible, use the same scheme as for detection 054 (Break in list), displaying beginning and terminating letters before the faulty break with « … » in-between. Thanks - -- Laddo 66.131.214.76 23:55, 7. Apr. 2009 (CEST)
- I will insert this. No problem. -- sk 08:30, 8. Apr. 2009 (CEST)
Two sizes of lists?
I understand that you have quite a bit on your hands right now with this, but I just thought I'd mention this for if/whenever you have the chance. If you're able to, would you be able to make a separate page which lists up to 200 pages for each error, rather than the 50 (for ENwiki)? The 50 is good to keep the wiki page from getting too large and to ensure that it saves without any errors, but I can go through some of the errors (like "Headline with bold") really quickly, and it would be nice to have a longer list to use selectively for different errors. I'm not sure if this would be easy to do or not, but it would be much appreciated if it isn't too much trouble. Thanks! -Drilnoth (Talk) 05:09, 3. Apr. 2009 (CEST)
- I'm not sure if I'm right about this, but there seems to be a textfile here listing all the error30's, not just the 50 in the limited updates. Perhaps it is possible to create such lists for all the errors so that the eager beavers like Drilnoth can access them there? --Helt 06:34, 3. Apr. 2009 (CEST)
- In the same folder you will find a list with all errors. There you find all articles with errors. If you only want error 44 "Headline with bold", then check the part after the article name: if there is a 44, my script found this problem there. The special lists for errors 30 and 37 are only updated after a dump scan (not daily). Maybe at the weekend I will have time to program my script so that after every dump scan it creates a list for every error, like the ones for 30/37. -- sk 09:49, 3. Apr. 2009 (CEST)
- That would be great... having the list of all errors by article is nice, but it would be really useful if it was formatted in a similar way to the main list (by problem, in a table format, rather than listed by article). Thanks! -Drilnoth (Talk) 17:27, 3. Apr. 2009 (CEST)
- Thanks! I have one other quick question if it wouldn't be too much trouble... could the lists use wikilinks around each article name? I ask because then it is easier to import the list for use by AutoWikiBrowser to make fixing the problems easier. Thanks again! -Drilnoth (Talk) 21:20, 7. Apr. 2009 (CEST)
- Actually, nevermind. I just figured out how to use .txt files to accomplish this. -Drilnoth (Talk) 16:06, 8. Apr. 2009 (CEST)
Error list.txt file
Hi, Stefan: I think it would be a great idea to sort the text file **wiki_error_list.txt by error number instead of alphabetically. That way I can correct all errors at once, not 50 by 50 every day. I've tried to split the full error list myself, but I couldn't do it − it's very large, about 2 MB.
Another option is making one complete list for each error (maybe this is easier for you, I don't know), because the wiki-generated page only offers 50 errors to correct. I hope you can add one of these to your script if possible. Regards, Muro de Aguas 20:43, 3. Apr. 2009 (CEST)
- I will try to create a list for every error. -- sk 06:14, 4. Apr. 2009 (CEST)
- Thank you very, very much for creating the lists! Muro de Aguas 17:57, 8. Apr. 2009 (CEST)
Would you provide us with an estimate? -- User:Docu
- Ok. -- sk 08:16, 7. Apr. 2009 (CEST)
- Actually Stefan, I still find it obscure how the detection counts fluctuate in the Summary that appears at the top: in the French wiki, version of April 7, the counts of many detections decreased at a, say, suspect speed... Can you clarify how you maintain this count daily for each detection type? It must be tricky, since in general you do not re-scan all articles from the global error list. -- Laddo 199.22.61.2 13:58, 7. Apr. 2009 (CEST)
- I saw this today in frwiki, and I have no idea. Is it possible that someone made a bot run yesterday? AWB or similar? Yesterday I only changed the script's output "List of all articles with error XXX"; this has nothing to do with the statistics. See dewiki: there we have a normal change today, not such a big change as in frwiki. -- sk
- I think I have a better explanation. Two or three days ago I did this full dump scan. There my script found many errors, but many of them had already been repaired in frwiki by that time. When my script starts the first live scan, it finds for every error the first 50 hits from the new and changed articles. If these first 50 are repaired, it searches for more and does not find any in the next 1000-2000. So right after the full dump scan you see a very big number, but in the following days this number drops. This is the normal behaviour after an out-of-the-ordinary full dump scan. :-) -- sk 17:33, 7. Apr. 2009 (CEST)
- Does that mean that if we fix items 51 to 1000 for one error based on the list on toolserver, this wouldn't display any changes? -- User:Docu
I will describe it again. When the script scans the dump, it may find error 77 maybe 10000 times. After this dump scan my script scans the live Wikipedia. First it scans new and changed articles; maybe it finds 20 articles with error 77 there. Then it scans 30 articles with error 77 from the big list, so that there are 50 articles in the output. The next day this procedure repeats. - When I scan the Wikipedia dump again, maybe one month later, I again get the 10000 articles with errors in my big list. Maybe in the next live scan the script finds 50 articles with error 77 even without the big list, so the big list still contains 10000 errors. After two days the first 50 are repaired and the script starts checking the front of those 10000. Now it finds that 2500 of them were repaired by users during the last month. So the error count drops by 2500 articles in a single night. This is the reason for the big change tonight in frwiki: I scanned this dump two days ago. -- sk 21:20, 7. Apr. 2009 (CEST)
- That's it! Scanning a dump that is one month old will initially report errors that were corrected since then; however, the first time that list is used to complete an individual error list, the script will naturally scan the list in the same order as these articles appeared in the daily reports of 50 over the last month. So in a single day (but not necessarily the same day for each detection type), the script will filter out all errors already corrected for that type, causing a sudden drop in detection counts, before returning to normal daily changes on the next day. This is a somewhat confusing side-effect of after-the-fact dump scans. Now that the limit of 40000 detections is no longer an issue, it will likely never be necessary to scan a dump a second time, thanks to your recent processing improvements! Good to know! Thanks for the clear answer. -- Laddo 66.131.214.76 23:24, 7. Apr. 2009 (CEST)
- Back to my question: on the project page, the report is limited to 50 items. Presumably the 50 items that appear first in the list. Now, if one works off the list on toolserver (items #1-#50 and items #51 through #1000), will it have any effect if items #51 to #1000 are cleared? If on the next day one just scans for 50 errors, there will be 50 errors just by going through items #1 through #50 (this is unrelated to the special scan for fr.wp) -- User:Docu
- Yes and no! It is a little bit different. Say error 77 has 2000 articles. The simple case (without new and changed articles): you repair #1-#50 of error 77. The next day my script scans #1-#50 for error 77, finds no error, and checks the next 50 articles, #51-#100, where it finds 50 new hits. So the total count for error 77 goes down to 1950. - Now the real-world case (with new and changed articles): error 77 was found in 2000 articles, and you fix #1-#50 on day one. The next day my script first scans "new" and "changed" articles (but not all of them). 1.) Maybe it finds 50 new hits among those; then your 50 fixed articles are not re-scanned that day and the number grows to 2050. 2.) Maybe it finds only 10 new hits; then my script checks the next 40 in the big list, #1-#40, and finds no hits (you fixed #1-#50), so it scans the next 40, #41-#80, and finds 30 hits; now it has 10+30 = 40 hits and scans #81-#90 for the last 10. After that my script knows 50 articles with error 77. I hope this helps to understand it. It is a little bit tricky, but I do not want to scan all articles every day. -- sk 08:50, 8. Apr. 2009 (CEST)
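A rough, hypothetical Perl sketch of the selection logic described above; the article names, the 75%-repaired rate and the helper function are made up for illustration and are not part of the real script:
 #!/usr/bin/perl
 use strict;
 use warnings;

 # for one error type: hits from new/changed articles come first, then the big
 # dump list is walked and re-checked live until 50 candidates are collected
 my @new_and_changed = ( 'New article A', 'Changed article B' );
 my @big_dump_list   = map { "Dump article $_" } 1 .. 2000;
 my $limit           = 50;

 sub still_has_error { return rand() < 0.75 }    # stand-in for a real re-scan

 my @output;
 for my $article ( @new_and_changed, @big_dump_list ) {
     last if @output >= $limit;
     push @output, $article if still_has_error($article);
 }
 printf "reporting %d articles for this error type\n", scalar @output;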
- I understand. I was asking as my bot partially fixed about 1000 of check #7 in en.wp, but this didn't seem to have much effect on the stats (currently 3172). -- User:Docu
- Stefan, you could implement a small improvement: unconditionally scan articles from these error lists until you find the first article still bearing the error. This way you could better report the daily progress of corrections. -- Laddo 199.22.57.2 13:55, 8. Apr. 2009 (CEST)
- The problem is that the bot can only fix some of the errors. The first 50 errors can easily all be ones that the bot can't fix, and thus the script won't rescan all articles. -- User:Docu
Sorry, but I see no way to change this. The script is so complex at the moment that I am very happy that it works at all. In the future I will not work so much on the basics of the script; I will add some new errors or fix little problems. I think the exact count for an error is not so important. If a user knows there are more than 300 errors outstanding, that is enough; it is not important whether it is 312 or 321 at the moment. He knows that this error is a problem and he can fix it. If you want to work with a tool like AWB, you can use this list. This list is also only a list of articles that "maybe" have an error; it is not 100% correct. Only if I scanned all articles every day would we get the correct number. -- sk 22:00, 8. Apr. 2009 (CEST)
ISBN wrong
Today the second error message reads:
„Anthropologie 3-89104-413-5 - Wrong Length 11 and not 10 or 13 “
Am I counting wrong, or is the script? I see four groups of digits, 1 + 5 + 3 + 1 = 10 digits. The same happens in many further error messages. --Andrsvoss 11:16, 10. Apr. 2009 (CEST)
- Ok, done. The script had a problem with double spaces or something similar after the ISBN number. This will no longer appear in the next run. -- sk 11:24, 10. Apr. 2009 (CEST)
ISBN (x and X)
Hello Stefan, if I am reading the text in the Perl script if ($character =~ /[ 0-9X\-]/)
correctly, a lower-case x is not matched. Is that intentional? If so, could you perhaps point it out explicitly; otherwise one searches at length through some catalogue only to turn an x into an X :) --MaEr 16:35, 11. Apr. 2009 (CEST)
- I can do that. -- sk 18:38, 11. Apr. 2009 (CEST)
- Thanks! --MaEr 18:30, 12. Apr. 2009 (CEST)
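To illustrate the behaviour being discussed, here the quoted character class is applied to a few sample characters (the surrounding loop is only an illustration, not the script itself):
 #!/usr/bin/perl
 use strict;
 use warnings;

 # the character class from the script accepts digits, space, "-" and an
 # upper-case X, but not a lower-case x as the ISBN check digit
 for my $character ( 'X', 'x', '5', '-' ) {
     if ( $character =~ /[ 0-9X\-]/ ) {
         print "'$character' is accepted\n";
     }
     else {
         print "'$character' is rejected\n";    # only the lower-case x ends up here
     }
 }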
ISBN reporting suspect
Hi Stefan, this new detection reports "1 - wrong length 1 and not 10 or 13" on article fr:Mâyâ where the only occurrence of string "ISBN" is in the following text : «..., ISBN 1−55939−007−7 ». The script seems to report cases where there are not even curly brackets ({{..}}) around the string. -- Laddo 66.131.214.76 21:12, 11. Apr. 2009 (CEST)
- The problem is the "−": it is not a normal "-" (hyphen/minus). I have changed this in the article. Now the ISBN has a link. --sk 22:14, 11. Apr. 2009 (CEST)
- Wow! I did not even know that the link could exist without even using a template... Thanks -- Laddo 66.131.214.76 14:40, 12. Apr. 2009 (CEST)
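A hedged Perl example of the problem: U+2212 (minus sign) looks like an ASCII hyphen but breaks the automatic ISBN link; normalising it is one possible fix. The example ISBN is the one from the article above; everything else is an assumption:
 #!/usr/bin/perl
 use strict;
 use warnings;
 use utf8;
 binmode STDOUT, ':encoding(UTF-8)';

 my $isbn = 'ISBN 1−55939−007−7';           # contains U+2212, not "-"
 ( my $fixed = $isbn ) =~ s/\x{2212}/-/g;   # replace the Unicode minus with a hyphen
 print "$fixed\n";                          # ISBN 1-55939-007-7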
ISBN false positive
I found on the Dutch wikipedia some false alarms:
- http://nl.wikipedia.org/wiki/Tramlijn_Ede_-_Wageningen
- http://nl.wikipedia.org/wiki/Michel_Schooyans
- http://nl.wikipedia.org/wiki/Tulku
- http://nl.wikipedia.org/wiki/VARA_gezinsencyclopedie
Rudolphous 08:47, 12. Apr. 2009 (CEST)
- Tramlijn_Ede_-_Wageningen - ISBN-nummer
- Tulku - ISBN 978 90 04 12766 0 (wrong ISBN)
- Michel_Schooyans - [http://www.dehoniane.it/edb/cat_dettaglio.php?ISBN=24109]
- VARA_gezinsencyclopedie - [https://www5.cbonline.nl/pls/apexcop/f?p=130:1010:401581703141772 ISBN-bureau]
Thanks for this info. I have checked these. Only Tulku is a real error. I will change my script for the other ISBN problems. -- sk 11:52, 12. Apr. 2009 (CEST)
- Ok, I have changed the script. -- sk 16:12, 12. Apr. 2009 (CEST)
- Great! I'm busy with fixing all the isbn errors on the Dutch wikipedia. Rudolphous 19:22, 12. Apr. 2009 (CEST)
- Hi Stefan, The new script triggers "ISBN wrong position of X" on ISBN 906550558X, 90 805348 2 X, etc. I think that's not correct. Rudolphous 20:44, 12. Apr. 2009 (CEST)
- Ok, my script did not work correctly for this error. I have changed this now. -- sk 21:01, 12. Apr. 2009 (CEST)
False positives in ndash/mdash detection
I have found quite a few false positives for error 50. For instance, ca:Anur or ca:Ariel Sharon --Joancreus 16:31, 4. Apr. 2009 (CEST)
- Sorry, I can't find the false detection in the script output (from yesterday or today), and also nothing in these two articles. Where did you find this information? Please give me a link to the version of the page where you found these false positives. Thanks. -- sk 19:09, 4. Apr. 2009 (CEST)
- This page used to have ndash tags, but it had been fixed a few days before. On April 4th, this version was live, but it still appeared on your list. I think there may be some delay in the detection of fixes. Right now, I keep getting "Reference with punctuation" errors (#61) in articles I fixed a couple of days ago. See, for instance, ca:Abd-Al·lah (nom) or ca:A Saucerful of Secrets. They are in today's list, but the problem was removed on April 11th, I believe.--Joancreus 11:31, 13. Apr. 2009 (CEST)
- That is right. The script only checks a part of all articles in the list daily. If nothing has changed, then only 50 articles are scanned. The list with all errors is not 100% accurate; it is only the list from the last dump and the last scans. To be 100% accurate I would have to scan all articles every day, and that is not possible. -- sk 16:58, 14. Apr. 2009 (CEST)
ISBN wrong length
Hello, in this situation: ISBN 222713951X ISBN 978-2227139510, seen here fr:Tribus musulmanes et juives de Yathrib, your script says "ISBN wrong length 222713951X 978-2227139510 || 23" Drongou 11:42, 13. Apr. 2009 (CEST)
- Today I completely rewrote the ISBN detection in my script. Please wait for the next scan tomorrow. If the problem occurs again, report it here. Thanks for your info. -- sk 19:33, 13. Apr. 2009 (CEST)
- Ok, thanks. Drongou 20:58, 14. Apr. 2009 (CEST)
ISBN information
Hi Stefan,
On the French wiki page fr:Sociologie du corps there are a lot of ISBNs. 4 of them are marked "wrong checksum" (error 69), but I don't know which ones: the table looks like this:
Sociologie du corps | -10 |
Sociologie du corps | -13 |
Sociologie du corps | -10 |
Sociologie du corps | -13 |
Is it possible to add this information to the detection summary?
Al1 18:08, 14. Apr. 2009 (CEST)
- Search for "ISBN-13:" or "ISBN-10:". This should be replace by "ISBN 1234567890" or "ISBN 1234567890123". -- sk 20:17, 14. Apr. 2009 (CEST)
- There are instances where the use of "ISBN-10" or "ISBN-13" would be valid, mostly when talking specifically about one or the other. Does this check currently try to identify and exclude these? —Dinoguy1000 (diskussion • Beiträge • @en) 20:35, 14. Apr. 2009 (CEST)
- I think the use of "ISBN-10" or "ISBN-13" isn't ok. If I use only "ISBN", then the number gets a link and I can go to the page "Special:Booksearch". That is what we want. --sk 08:23, 15. Apr. 2009 (CEST)
- Not quite what I was talking about. I was referring to situations in prose like "The ISBN-10 check digit is calculated by...". Does your bot try to identify and exclude these types of usages, specifically? —Dinoguy1000 (diskussion • Beiträge • @en) 19:26, 15. Apr. 2009 (CEST)
- Ok, now I understand. No, my script can't identify this. But I think this is only a problem for the article ISBN. Is this a problem in frwiki? -- sk 21:55, 15. Apr. 2009 (CEST)
Error 006 issue - huwiki
Hi! We have lots of false positives with error "special characters in DEFAULTSORT" in huwiki (example: hu:Csabb). We use the tilde for special Hungarian characters (I took a quick look in the script source and saw some checks for this, but these articles are still marked with error 006). Actually, the legal characters for DEFAULTSORT in huwiki are numbers, latin chars, tilde and space. Can you take a look, please. Thanks in advance, and keep up the good work, your tool is invaluable! Mami 15:40, 15. Apr. 2009 (CEST)
- Hi, this week I don't have the time to fix this, but next week I will try. -- sk 21:52, 15. Apr. 2009 (CEST)
Check #7 (Headlines start with three "=")
[1] lists en:2008 Estoril Superleague Formula round (scanned version), which appears to be a false positive. - User:Docu
- Another one here: en:2008 Vallelunga Superleague Formula round (scanned version) -- User:Docu
- For the first one, see this change. -- sk 21:48, 15. Apr. 2009 (CEST)
- The second is the same. -- sk 21:49, 15. Apr. 2009 (CEST)
- Thanks. I didn't quite look for that. -- User:Docu
Pages renamed in Hungarian Wikipedia
Hi! I have renamed hu:Wikipédia:WikiProject Check Wikipedia to hu:Wikipédia:Ellenőrzőműhely and hu:Wikipédia:WikiProject Check Wikipedia/Translation to hu:Wikipédia:Ellenőrzőműhely/Fordítás to have page names in Hungarian instead of English. Can you please change your script to use the new names (old ones are still there as redirects). Thanks in advance, Mami 14:34, 15. Apr. 2009 (CEST)
- Thanks for this info, I will change the script soon. I think it also works with redirects. -- sk 21:53, 15. Apr. 2009 (CEST)
- Ok. -- sk 21:08, 16. Apr. 2009 (CEST)
Error 037 : exclude chinese and japanese characters
Hi Stefan, all articles with Chinese and Japanese characters at the beginning of their titles, like 字 (sinogramme), are reported. See the long list at the end of List of all articles with error 037. Please exclude from this detection all Unicode characters between U+4E00 (&#x4E00;) and U+4FFF (&#x4FFF;), according to this table. Thanks ;) -- Laddo 66.131.214.76 04:48, 16. Apr. 2009 (CEST)
- Or you may elect to only include roman accented characters: according to tables from fr:Table des caractères Unicode (0000-0FFF), characters potentially causing problem with accents seem restricted to ranges [U+00C0;U+017E], [U+01C4;U+024F] and [U+0300;U+036F]. -- Laddo 199.22.57.2 13:53, 16. Apr. 2009 (CEST)
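A hypothetical Perl sketch of the two variants suggested above; the character ranges come from the messages, everything else is an assumption for illustration:
 #!/usr/bin/perl
 use strict;
 use warnings;
 use utf8;
 binmode STDOUT, ':encoding(UTF-8)';

 for my $title ( '字 (sinogramme)', 'Éléphant', 'Zebra' ) {
     my $first = substr $title, 0, 1;
     if ( $first =~ /[\x{4E00}-\x{4FFF}]/ ) {
         print "$title: skipped (CJK character)\n";
     }
     elsif ( $first =~ /[\x{00C0}-\x{017E}\x{01C4}-\x{024F}\x{0300}-\x{036F}]/ ) {
         print "$title: needs a DEFAULTSORT (accented Latin letter)\n";
     }
     else {
         print "$title: no special first character\n";
     }
 }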
Hungarian translation
Hi! The Hungarian translation has been moved to hu:Wikipédia:Ellenőrzőműhely/Fordítás (fordítás means translation in Hungarian). Could you correct it, so that the script would find it? Thanks - Matthew (84.1.171.193 17:38, 16. Apr. 2009 (CEST))
- Ok, I have changed the script. -- sk 21:03, 16. Apr. 2009 (CEST)
Plainlinks in article namespace
Hi. There isn't really a valid reason to use <span class="plainlinks"> in article namespace. Maybe it's worth scanning for this, it would find articles like en:Preco (this version). -- User:Docu
- What does this "plainlinks" do? I have never seen it before! -- sk 11:03, 17. Apr. 2009 (CEST)
- There is no icon for external links inside the plainlinks class. Malarz pl 11:29, 17. Apr. 2009 (CEST)
- Yes, it's defined in en:MediaWiki:Common.css. It makes external links appear like internal ones. It's generally used in templates. -- User:Docu
- en:Special:Search/plainlinks gives some results, but I doubt it's exhaustive. -- User:Docu
DEFAULTSORT wikimedia's bug
Hi Stefan, I am experiencing some problems with DEFAULTSORT and I think that your script could detect them (it already detects special characters...): when a DEFAULTSORT begins with [a-z], e.g. {{DEFAULTSORT:maisons d'edition francaises}}
The result can be seen here: fr:Catégorie:Liste d'entreprises par secteur. m and M are not sorted the same ... (I will update the DEFAULTSORTs once you have read my message)
I think a good detection would be: "DEFAULTSORT with [a-z] as first character"
Al1 12:05, 17. Apr. 2009 (CEST)
- That is right, but is it possible that sometimes this is ok? I mean, this could be different in some languages, but at the moment I don't have an example. -- sk 13:15, 17. Apr. 2009 (CEST)
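A minimal sketch of the suggested detection ("DEFAULTSORT with [a-z] as first character"); the regex is illustrative and not the script's actual code:
 #!/usr/bin/perl
 use strict;
 use warnings;

 my $text = "{{DEFAULTSORT:maisons d'edition francaises}}";

 if ( $text =~ /\{\{DEFAULTSORT:([a-z])/ ) {
     print "sort key starts with lower-case '$1'\n";
 }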
Error 056 gives lots of false positives
Hi Stefan, detection 056 (ASCII Art) suddenly reported 236 new errors in frwiki (see here), and over 40 of the 50 errors listed report this:
fr:Liste de ponts d'Espagne | …adBalancer->reportConnectionError(Object(Database))#1… |
fr:Lièvre | …adBalancer->reportConnectionError(Object(Database))#1… |
fr:M71 | …adBalancer->reportConnectionError(Object(Database))#1… |
fr:Matrice nulle | …adBalancer->reportConnectionError(Object(Database))#1… |
fr:Mer Morte | …adBalancer->reportConnectionError(Object(Database))#1… |
fr:Mikoyan-Gourevitch MiG-19 Farmer | …econstruit=> 5 000|équipage= 1|nombredemoteur=2|typedemoteur=[[turboréacteur|Turboréacteurs]]… |
fr:Musée Dupleix de Landrecies | …: Database->reportConnectionError('No working slav...')#1… |
fr:Médaille d’Or Heinrich Tessenow | …adBalancer->reportConnectionError(Object(Database))#1… |
fr:Neue Slowenische Kunst | …adBalancer->reportConnectionError(Object(Database))#1… |
fr:Novaïa Gazeta | …adBalancer->reportConnectionError(Object(Database))#1… |
fr:Ordre de l'Amitié des peuples | …adBalancer->reportConnectionError(Object(Database))#1… |
Such text does not appear in articles; possibly this resulted from... some connection error. Bye -- Laddo 199.22.57.2 13:33, 17. Apr. 2009 (CEST)
- This is a connection error; I cannot fix it. Tomorrow these entries will be gone. -- sk 13:39, 17. Apr. 2009 (CEST)
- OK. Thanks -- Laddo 199.22.57.2 13:41, 17. Apr. 2009 (CEST)
Broken link
Hi Stefan, at the top under "What can you do?" a broken link is shown.
** The script creates a new error page at the toolserver every day. Please copy and paste that [http://toolserver.org/~sk/checkwiki//_output_for_wikipedia.txt page at the toolserver] to this page here. Attention: That page is a UTF-8 document. In case your browser cannot display the file in UTF-8 you can copy it into a text editor (for example: Notepad++) and convert it to UTF-8.
The link does appear again below the table, where it works, but it still would not hurt to fix it.
Best regards -- Marbot 23:48, 17. Apr. 2009 (CEST)
By the way, in about a week we will be through the current errors on pdc. We are therefore looking forward to further improvement suggestions. :-))
- Thanks for this input. I will fix this in the next week. -- sk 21:00, 18. Apr. 2009 (CEST)
Error 069
Hello Stefan. ja:Help:ISBNのリンク (the Japanese equivalent of en:Wikipedia:ISBN) recommends giving both ISBN-10 and ISBN-13, like "ISBN 4-582-85207-6 (ISBN-13 978-4-582-85207-3)", when the referenced book has both numbers. There are cases where this is helpful. For example, some library services record only the ISBN-10 if the book was published before a certain date, and readers of Wikipedia articles can't use such services if the articles don't give the ISBN-10. Is it possible to accept the prefixes "ISBN-10" and "ISBN-13"? --fryed-peach 18:47, 18. Apr. 2009 (CEST)
- Write "ISBN 4-582-85207-6 (ISBN 978-4-582-85207-3)" and both ISBN had a link and the script has no problem. It is very difficult or impossible to know when this ISBN-10/-13 is ok and when it is wrong. I think it is better to write it everywhere only with "ISBN". I have no idea, how to fix this special problem. I think at the moment we found with this "ISBN-10" and "ISBN-13"-detection many ISBN-problems. -- sk 21:21, 18. Apr. 2009 (CEST)
- Thank you for your reply. We will just ignore those false positives at this time. I, personally, think it too redundant to write double ISBN with both links like "ISBN 4-582-85207-6 (ISBN 978-4-582-85207-3)". --fryed-peach 14:58, 19. Apr. 2009 (CEST)
Check 57
[2] lists two false positives (en:Earth radius#Equatorial radius:.C2.A0.C2.A0a). These are pages with <math> tags in the title. Minor really. -- User:Docu
- Ok, I fixed this. -- sk 20:09, 5. Apr. 2009 (CEST)
Code 55 - false positives
At nl:Wikipedia:Wikiproject/Check Wikipedia#Code 055: Hellip there are 6 articles listed, but none of them have the '…' code in the article. It looks like, for some reason, only false positives are detected (or the problem description is incorrect). - Robotje 10:15, 6. Apr. 2009 (CEST)
- That is correct, the error has changed. Please change the description. This error now detects double small tags; Hellip was merged into error 11. -- sk
New error - thumbs with forced size
Hi Stefan, I have an idea for a new error. Could your script flag images that use the thumb, thumbnail or frameless syntax and also have a forced width in px? For example: [[File:Foo.jpg|thumb|250px|Foo]]. I don't know about other wikis, but on cs.wiki this is discouraged because it overrides user preferences, so the upright syntax is preferred. --Reaperman (cs) 11:43, 16. Apr. 2009 (CEST)
- I think this is possible. I will try this in the next week. -- sk 22:26, 16. Apr. 2009 (CEST)
- Thank you. --Reaperman (cs) 11:46, 21. Apr. 2009 (CEST)
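A hedged Perl sketch of the suggested check; the pattern and the sample wikitext are assumptions, and real image syntax has many more variants:
 #!/usr/bin/perl
 use strict;
 use warnings;

 # a thumb/thumbnail/frameless image that also carries a fixed width in px
 # overrides the reader's preferences
 my $text = '[[File:Foo.jpg|thumb|250px|Foo]] and [[File:Bar.jpg|thumb|Bar]]';

 while ( $text =~ /\[\[(?:File|Image):([^\]|]+)\|([^\]]*)\]\]/gi ) {
     my ( $name, $params ) = ( $1, $2 );
     if ( $params =~ /\b(?:thumb|thumbnail|frameless)\b/i && $params =~ /\b\d+px\b/i ) {
         print "$name: thumbnail with forced size\n";
     }
 }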
Error 11 on enWiki
On enWiki, the &amp; entity is sometimes used to help display another character's code as raw text rather than having it appear on the page in its rendered form. For example, in en:Beta (letter), the code &amp;beta; is used to put "&beta;" on the page's display, rather than the Unicode character β. Would it be possible for your script to not detect &amp; as an error, since it is used both this way and in external links quite a bit? It certainly is possible to use "code" or "nowiki" tags halfway through the text to prevent it from being rendered, but I think that the other version is cleaner. Otherwise, I haven't seen any bugs in that error (although honestly it isn't a bug, just a common false positive). Thanks! -Drilnoth (Talk) 01:30, 19. Apr. 2009 (CEST)
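A hedged sketch of the exclusion being requested: report named HTML entities, but skip &amp; (and, as discussed further down, &lt; and &gt;). The regex and sample text are assumptions only:
 #!/usr/bin/perl
 use strict;
 use warnings;

 my $text = 'Rendered as &amp;beta; in the source, with &hellip; elsewhere.';

 while ( $text =~ /&([a-z]+);/gi ) {
     my $entity = lc $1;
     next if $entity eq 'amp' or $entity eq 'lt' or $entity eq 'gt';
     print "entity &$entity; found\n";    # only &hellip; is reported here
 }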
- I agree with Drilnoth. This error gives a long list of articles on the Swedish wiki containing links to URLs that contain "&". Also, by the way, this error has an incorrect translation on sv-wiki: it should be "HTML namn" rather than "Fel antal hakparentser", which is the translation for error number 10 (copy-paste error?). /Sten André 15:53, 19. Apr. 2009 (CEST)
- I will fix this. -- sk 08:53, 20. Apr. 2009 (CEST)
- Thanks. -Drilnoth (Talk) 15:31, 20. Apr. 2009 (CEST)
- Actually, could you also try to deactivate &lt; and &gt;? They seem to have a number of false positives from people using them to escape MediaWiki's code conversions... for example, &lt;em&gt; produces <em> for use in examples. -Drilnoth (Talk) 14:29, 21. Apr. 2009 (CEST)
- Ok, I have deactivated &amp;, &lt; and &gt;. -- sk 21:47, 21. Apr. 2009 (CEST)
- Great, thanks! -Drilnoth (Talk) 03:15, 22. Apr. 2009 (CEST)
Summary table
In the markup of the table {{FULLPAGENAME}} might not be needed [3]. -- User:Docu
- Thanks for this info. I will test it. -- sk 08:50, 20. Apr. 2009 (CEST)
- Ok. -- sk 21:49, 21. Apr. 2009 (CEST)
Template_with_Unicode_control_characters
I can't fix two of them:
1) Autonome Hui Prefectuur Changji |letterl=|Anders=}} 2) Avigdor Lieberman | nationaliteit = Israelisch| afbeelding = AvigdorLieberman.jpg| opvolging
1 = http://nl.wikipedia.org/wiki/Autonome_Hui_Prefectuur_Changji 2 = http://nl.wikipedia.org/wiki/Avigdor_Lieberman
They seem to already contain Unicode (Chinese) characters.
Cheers,
Rudolphous 21:04, 21. Apr. 2009 (CEST)
- Hello Rudolphous, look at my change. The control characters were in naam and oie. If you copy the text into Notepad++, you can see an arrow at the end of the values. That was the problem, and I have now deleted it. -- sk 21:11, 21. Apr. 2009 (CEST)
- Oke, thanks! Rudolphous 22:11, 21. Apr. 2009 (CEST)
Code 017
Only after this edit did the script notice that the category 'Neolithicum' was duplicated. So with
- [[Category:abc]]
- [[Category:Abc]]
no double category is found but with
- [[Category:Abc]]
- [[Category:Abc]]
the script detects a duplicate category. Since the first letter of a category name is always capitalized, this shouldn't make any difference. It's a minor problem, but please take a look at it. - Robotje 08:26, 18. Apr. 2009 (CEST)
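A small Perl sketch of the fix being asked for, normalising the first letter before comparing; variable names and data are illustrative only:
 #!/usr/bin/perl
 use strict;
 use warnings;

 # the first letter of a page name is case-insensitive in MediaWiki, so
 # "abc" and "Abc" should count as the same category
 my @categories = ( 'abc', 'Abc', 'Def' );
 my %seen;
 for my $cat (@categories) {
     my $key = ucfirst $cat;
     print "duplicate category: $cat\n" if $seen{$key}++;
 }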
- Very interesting. I will fix this next week. Thank you for this info. -- sk 21:14, 18. Apr. 2009 (CEST)
- Check the same for error 64 (Link equal to linktext); the case of the first letter should be ignored. -- Laddo 66.131.214.76 18:35, 19. Apr. 2009 (CEST)
- Ok, I have added this for error 17 but not for error 64, because I think a link like xYZ is ok and not an error. Or can you give me a good example? -- sk 22:16, 21. Apr. 2009 (CEST)
- Writing [[XYZ|xYZ]] or [[xYZ]] gives the same result: they both display xYZ and link to page XYZ -- the second (simpler) syntax should be used. --Laddo 66.131.214.76 05:27, 22. Apr. 2009 (CEST)
Check 7 on en.wp
As per en:WT:WikiProject Check Wikipedia#Forcing a section update, would it be possible to scan the articles on tools:~sk/checkwiki/enwiki/enwiki_error_list_error_007.html to see how many remain to be fixed? Currently there are 3242. -- User:Docu
Just wondering for en:Wikipedia:WikiProject Check Wikipedia/old. Is it correct that the scan terminated after 40,000 errors (e.g. as per Laddo here)? If yes, how many articles were scanned or which percentage of the full dump? -- User:Docu
- The full dump is always scanned completely, but the live scan stops after 40000 errors. With the new system, though, we do not get that many errors in the live scan. For example, in frwiki we know of over 110000 errors, but my script scans only 8649 (a maximum of 50 per error type). At the moment we have 75 errors x 50 = 3750 articles, and then you must add the new and changed articles. So we have between 8000 and 15000 scans per day (yesterday in enwiki: 13869). -- sk 08:36, 22. Apr. 2009 (CEST)
- Then for check 43 in en.wp (for which the March 15 dump was scanned and the scanning wasn't improved since), the only errors we don't know about yet are the ones created in articles between March 15 and when you started livescan? - User:Docu (April 22, 2009)
error 56 (ascii art)
In pl:Aj there are used "->" inside <hiero> tags. It should't be noticed as error. Malarz pl 22:20, 6. Apr. 2009 (CEST)
- See the translations of this article into English and German; there is no "->" in the "hiero" there. I think we don't need this "->" inside the hiero, but I am not an Egyptologist. Please ask someone who knows this subject. -- sk 08:12, 7. Apr. 2009 (CEST)
- In the German article, images from {{Hiero/Kartusche}} are used to draw the outline of the cartouche. On pl.wiki only the WikiHiero extension is used, where it is possible to write <- and ->. I think the script should not check characters inside <hiero> tags, the same as with <math> and <code> tags. Malarz pl 10:43, 7. Apr. 2009 (CEST)
- Ok, I will change this. -- sk 15:35, 7. Apr. 2009 (CEST)
- A week has gone by and there is no change. Malarz pl 12:11, 14. Apr. 2009 (CEST)
- Sorry, it is on my to-do list, but there are many things on it. :-) For now, please change the description of this error on plwiki and inform users about this problem. I will fix this in the future, but it is not easy. -- sk 13:37, 14. Apr. 2009 (CEST)
- Ok, I have included this in my script. -- sk 21:39, 22. Apr. 2009 (CEST)
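A minimal Perl sketch of what excluding <hiero> content before the arrow check could look like, analogous to the existing handling of <math> and <code>; the tag handling and arrow pattern here are simplified assumptions, not the script's actual code:
<source lang="perl">
#!/usr/bin/perl
use strict;
use warnings;

my $text = "Cartouche: <hiero><-i-mn:n-m-HAt:t-></hiero>\nA real ASCII-art arrow --> outside the tag.\n";

# Blank out everything between <hiero>...</hiero> before checking,
# so arrows that belong to the WikiHiero syntax are not reported.
(my $checked = $text) =~ s/<hiero>.*?<\/hiero>//sig;

while ($checked =~ /(<--?|--?>)/g) {
    print "Possible ASCII-art arrow: $1\n";
}
</source>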
Arrow as ASCII art
Hello. Is it possible to make the script ignore these arrows inside <hiero>...</hiero>? Cartouches are usually written this way; for example, a cartouche is written as <hiero><-i-mn:n-m-HAt:t-></hiero>. --146.102.126.83 01:10, 16. Apr. 2009 (CEST)
- See here. -- sk 22:28, 16. Apr. 2009 (CEST)
- Ok, I have included this in my script. -- sk 21:38, 22. Apr. 2009 (CEST)
Error 016 huwiki
Hi! In ~90% of cases these characters are appended to image filenames. Could you tell where these come from and how they could be avoided? Bean49 15:58, 18. Apr. 2009 (CEST)
- Copy the template or image text into an editor like Notepad++; there you can see these special characters and delete them. After that you can copy the text back into Wikipedia. -- sk 21:17, 18. Apr. 2009 (CEST)
- Thank you, but I can fix them. I asked whether you could tell me something about the origin of the problem. Bean49 22:38, 18. Apr. 2009 (CEST)
- I found this problem, and another user did too. See this discussion in the archive. -- sk 22:10, 19. Apr. 2009 (CEST)
How do these characters get inserted into the articles? ~90% are appended to image filenames. Bean49 18:20, 22. Apr. 2009 (CEST)
Code 003
Hi Stefan, Can you add {{appendix}} to your script? Rudolphous 08:43, 22. Apr. 2009 (CEST)
Error 76
The article en:Baker Clamp is fine; the URL is inside a cite and looks fine. 151.151.73.166 20:41, 22. Apr. 2009 (CEST)
- Sorry, I can't find an English article named "Baker Clamp". Or did you mean en:Baker clamp? There my script found the problem with the link in this version. The article used [[http...]] for an external link, which is wrong. Please use only [http...]. -- sk 21:13, 22. Apr. 2009 (CEST)
ISBN wrong position of X bug
The en:Japanese War Crimes article comes up as having an X in the wrong place; however, this is a bug. What it has is Barnaby, Wendy. The Plague Makers: The Secret World of Biological Warfare, Frog Ltd, 1999. ISBN 1-883319-85-4 ISBN 0-7567-5698-7 ISBN 0-8264-1258-0 ISBN 0-8264-1415-X, and the last X is counted as being in positions 40, 30 and 20, generating three errors, where it is actually just fine. 151.151.98.236 14:59, 16. Apr. 2009 (CEST)
- I know this bug and will fix it in the next week. -- sk 22:25, 16. Apr. 2009 (CEST)
- Ok, I have fixed this bug. -- sk 21:21, 23. Apr. 2009 (CEST)
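The underlying rule is that in an ISBN-10 an X may only stand in the last (check-digit) position. A minimal Perl sketch of a check that looks at each ISBN on a line separately, so that an X in the last ISBN is no longer measured against the start of an earlier one; the regular expressions are assumptions for illustration, and the last ISBN in the sample line is a deliberately invented broken one:
<source lang="perl">
#!/usr/bin/perl
use strict;
use warnings;

# The first three ISBNs are the valid ones from the article above;
# the fourth is a made-up example with X in the wrong place.
my $line = 'Barnaby, Wendy. The Plague Makers, Frog Ltd, 1999. '
         . 'ISBN 1-883319-85-4 ISBN 0-7567-5698-7 ISBN 0-8264-1415-X ISBN 0-82X4-1415-0';

# Handle every "ISBN ..." token on its own.
while ($line =~ /ISBN\s+([0-9Xx][0-9Xx -]{8,15})/g) {
    my $isbn = $1;
    (my $digits = uc $isbn) =~ s/[ -]//g;      # drop hyphens and spaces

    if (length($digits) == 10 && $digits =~ /X/ && $digits !~ /^[0-9]{9}X$/) {
        print "X at wrong position in ISBN $isbn\n";
    }
}
</source>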
Debate over error 066
There are discussions in the English and French wikis about error 66 (image description with <small>) by contributors dissatisfied with interventions from the Check Wikipedia team; the main issue concerns descriptions that combine two font sizes. As one of them said, « A difference of opinion over style is not a syntax error ». Please preferably separate it into two detections:
- Descriptions entirely enclosed in <small>
- Descriptions partly enclosed in <small>
This way, different instructions and/or activation can be used for these distinct issues. Thanks --66.131.214.76 05:19, 20. Apr. 2009 (CEST)
- I will try to separate this error into two errors. -- sk 08:54, 20. Apr. 2009 (CEST)
- Ok, see new error 077. -- sk 21:13, 23. Apr. 2009 (CEST)
Code 056: Pijl gemaakt met ASCII art
Your script seems to detect errors inside source tags, for example: <source lang="c">for (int i=0; i<=100; i++)</source>. Can you skip the contents between source tags when checking for this error? Rudolphous 12:10, 22. Apr. 2009 (CEST)
- My script detects source tags and excludes them. I think there is another problem in this article. Can you give me the name of the article so that I can check this? -- sk 20:58, 22. Apr. 2009 (CEST)
- You are right. If I find an error then I will come back to this. Rudolphous 07:44, 24. Apr. 2009 (CEST)
Interwiki link before the last category
This is pure source-text cosmetics; the ordering is merely a recommendation. Please remove this "error" from the script. --Sommerkom 19:08, 23. Apr. 2009 (CEST)
- This error category sometimes also finds real errors, e.g. when an interwiki link is hidden somewhere in the text. Considering that only very few articles show up there, it is not much of a problem either. -- sk 20:47, 23. Apr. 2009 (CEST)
Yes, it is a further problem: you are effectively encouraging users to carry out bot-like source-text cosmetics, see e.g. Spezial:Beiträge/Exil. They then invoke your "guideline", see Benutzer_Diskussion:Exil#Nochmal_.22fix.22. I have no patience for running from pillar to post over such nonsense while people pass the responsibility for clearly unwanted edits back and forth ("not my fault, I'm only following orders", "not my fault, they all misread my list"). Please make this clear; the next person who spams my watchlist with this kind of thing I will report on the VM. --Sommerkom 03:43, 24. Apr. 2009 (CEST)
- Stop! Please take a deep breath. Calling someone a vandal because he brings a few articles in line with the rules that apply on Wikipedia, by correcting the order of metadata, categories and interwikis, is completely over the top. We now have almost 900,000 articles, and certain standards have to be enforced, otherwise it ends in chaos. It has been established several times (see the deletion discussion) that actions do not have to be stopped just because someone feels disturbed on their watchlist. What can I say? In the beginning there were only 50 edits per day; by now I can no longer keep up with the many edits per day myself. So please consider the proportionality. Today exactly 9 articles out of almost 900,000 appear in the category "Interwikilink vor der letzten Kategorie". That is 0.001 percent of all articles that violate generally accepted rules here. -- sk 08:41, 24. Apr. 2009 (CEST)
- There is no generally binding rule, only a recommendation. Anyone is welcome to take care of this along with substantive improvements, but it does not justify edits of its own; that has been discussed to death. As I said, the next edit of this kind I will report on the VM. --Sommerkom 08:49, 24. Apr. 2009 (CEST)
- Have a look at Hilfe:Internationalisierung; it says "Interlanguage links should be inserted at the end of the article, sorted alphabetically by language code". The bold emphasis is exactly like that there! -- sk 09:36, 24. Apr. 2009 (CEST)
- Have a look at Hilfe:Kategorien: "For better clarity it is recommended..." You can explain on the VM what benefit such edits have. --Sommerkom 11:23, 24. Apr. 2009 (CEST)
- If 9 articles do not match the de facto standard on Wikipedia, do you really want to have the user who makes those nine changes blocked as a vandal? Don't you think you are using a sledgehammer to crack a nut here? Especially when it comes to enforcing such a "de facto standard", a vandalism report misses the point entirely. You did the right thing by addressing the user directly, but threatening me with a VM report here is, in my personal view, a total overreaction. Hence my earlier advice: just take a deep breath. Have a read of Wikipedia:Wikistress. -- sk 11:49, 24. Apr. 2009 (CEST)
Detect duplicate < references >
Hi Stefan, you might detect duplication of <references /> in articles (any form). -- Laddo 66.131.214.76 01:17, 14. Apr. 2009 (CEST)
- Ok, there is now error 78. For the first time it only detects <references />. -- sk 22:12, 23. Apr. 2009 (CEST)
- Thanks! I guess such duplications will quite often involve two different forms, like <references /> and {{reflist}}. -- Laddo 199.22.61.2 13:55, 24. Apr. 2009 (CEST)
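A minimal Perl sketch of how such a duplication could be counted, treating <references /> and a reference-list template such as {{reflist}} as equivalent, while ignoring grouped lists like <references group="note" /> (see the error 78 thread further down); the template name and patterns are assumptions for illustration, not the script's actual code:
<source lang="perl">
#!/usr/bin/perl
use strict;
use warnings;

my $text = <<'WIKI';
Some text.<ref>A source.</ref>
== References ==
<references />
{{reflist}}
<references group="note" />
WIKI

my $count = 0;
while ($text =~ /<references\b([^>]*?)\/?>/gi) {
    my $attrs = $1;
    next if $attrs =~ /group\s*=/i;   # a grouped list is a separate feature, not a duplicate
    $count++;
}
$count++ while $text =~ /\{\{\s*reflist\b/gi;   # count {{reflist}} as equivalent

print "More than one reference list found\n" if $count > 1;
</source>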
Error 058 - a little too sensitive
Hi, could you modify the script so that it doesn't find headlines like ===[[#A|A]] [[#B|B]] [[#C|C]]=== ...? They are sometimes used to create a letter index (example: [6]). Thank you! PG 213.134.178.180 00:23, 18. Apr. 2009 (CEST)
- Doesn't your language have a template for this? I think it exists as a template in all other languages. -- sk 21:02, 18. Apr. 2009 (CEST)
- pl:Szablon:Spis treści kategoria for categories; for articles, see en:Template:AlphanumericTOC (no pl version, apparently). --66.131.214.76 18:32, 19. Apr. 2009 (CEST)
- It seems that the pl version does exist, but it doesn't have any interwiki links. Thanks anyway. PG 213.134.178.180 19:30, 24. Apr. 2009 (CEST)
Code 078: Reference double
Hi Stefan, I don't get this error. What is wrong with the articles on: http://nl.wikipedia.org/w/index.php?title=Wikipedia:Wikiproject/Check_Wikipedia&oldid=16548612#Reference_double? Rudolphous 07:49, 24. Apr. 2009 (CEST)
- This is a new error and the script doesn't work correctly yet. Delete this error for today; I will fix my script tonight. -- sk 08:33, 24. Apr. 2009 (CEST)
- Ok, now it works. -- sk 22:06, 24. Apr. 2009 (CEST)
Unclear 046 detections
Hi, for some time, French detection reported these two occurrences of error 046:
fr:Squelette (oiseau) | ]] |
fr:Travail des enfants | ]] |
There are no details so no one could find what causes these detections. Any ideas ? Thanks -- Laddo 199.22.61.2 13:59, 24. Apr. 2009 (CEST)
- For the first one it's two lines above "== Crâne ==". AWB (SVN version) is quite efficient to help fix them. -- User:Docu
- I'm awaiting approval by an admin to use it... -- Laddo 199.22.57.2 18:42, 24. Apr. 2009 (CEST)
- Maybe it's not. I just noticed that there is a table in the image description. Try AWB. -- User:Docu
- In fr:Squelette (oiseau) the problem is in the section "Ceinture pelvienne": there is a table inside a table. At the moment my script has a problem detecting this. The same problem occurs in fr:Travail des enfants in the section "Répartition géographique". I know about this problem and will try to fix it in the near future. -- sk 16:36, 24. Apr. 2009 (CEST)
- Noted. I will leave a note in the description for now. Thanks -- Laddo 199.22.57.2 18:42, 24. Apr. 2009 (CEST)
Identical references?
Could your script pick up the error of having two separate references with identical content on the same page? These should be using the "name=" syntax, so that the same reference text doesn't keep showing up multiple times. Thanks! -Drilnoth (Talk) 19:37, 24. Apr. 2009 (CEST)
- I think this is difficult, but I will try it. -- sk 22:08, 24. Apr. 2009 (CEST)
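A minimal Perl sketch of how references with identical content could be found, so that they can be merged with the <ref name="..."> syntax; the whitespace normalisation and the sample citations are assumptions, not the script's actual logic:
<source lang="perl">
#!/usr/bin/perl
use strict;
use warnings;

my $text = <<'WIKI';
First claim.<ref>Smith, J. (2001). ''Example Book''. p. 12.</ref>
Second claim.<ref>Miller, A. (1999). Another source.</ref>
Third claim.<ref>Smith, J. (2001). ''Example Book''. p. 12.</ref>
WIKI

my %seen;
while ($text =~ /<ref[^>\/]*>(.*?)<\/ref>/sig) {
    my $content = $1;
    $content =~ s/^\s+|\s+$//g;       # normalise surrounding whitespace
    if ($seen{$content}++) {
        print qq{Reference used more than once, consider <ref name="...">: $content\n};
    }
}
</source>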
Dutch output
The last scan output of the Dutch Wikipedia on the toolserver seems incorrect; it suddenly jumped from 20,000 errors to 40,000 errors. P.S. All ASCII art errors on nlwiki were fixed yesterday, by the way. Rudolphous 08:26, 25. Apr. 2009 (CEST)
- Sorry, I have fixed the script. -- sk 11:55, 25. Apr. 2009 (CEST)
- It was really a mess, but it works correctly now. Thanks for fixing it. - Robotje 09:39, 26. Apr. 2009 (CEST)
Modification of error 068
Hi. Is it possible to exclude links to users on other projects, as these sometimes are used as bylines for photos? (An example of this can be found in the Norwegian article Penselsvin). --Helt 09:01, 25. Apr. 2009 (CEST)
- I think this is too difficult. But when is such a byline allowed inside the article? Are bylines on nowiki only done via a template? -- sk 22:07, 26. Apr. 2009 (CEST)
ISBN errors
Is there any possibility to exclude from the error list some ISBNs which have a wrong checksum but are printed on the books with this error and are used in public libraries? Example: ISBN 83-900227-6-3 (you can check it in the Polish National Library). Malarz pl 11:46, 27. Apr. 2009 (CEST)
- In the German Wikipedia we have a special template, Vorlage:Falsche ISBN. I see there is the same in enwiki and svwiki. I think the best way is to create a template like this in plwiki and to use it for these wrong ISBN numbers. Excluding them by hand in my script is not a good way. -- sk 22:03, 27. Apr. 2009 (CEST)
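For reference, a minimal Perl sketch of the ISBN-10 checksum and of how ISBNs already wrapped in such a template could be skipped; the template name is the German one mentioned above ({{Falsche ISBN}}), the second ISBN is made up, and none of this is the script's actual code:
<source lang="perl">
#!/usr/bin/perl
use strict;
use warnings;

# Returns 1 if the ISBN-10 checksum is valid (weights 10..1, X = 10).
sub isbn10_ok {
    my ($isbn) = @_;
    (my $d = uc $isbn) =~ s/[ -]//g;
    return 0 unless $d =~ /^[0-9]{9}[0-9X]$/;
    my $sum = 0;
    for my $i (0 .. 9) {
        my $c = substr $d, $i, 1;
        $sum += (10 - $i) * ($c eq 'X' ? 10 : $c);
    }
    return $sum % 11 == 0;
}

my $text = 'One book: ISBN 83-900227-6-3, another: {{Falsche ISBN|0-12-345678-9}}.';

# ISBNs marked with the "wrong ISBN" template are known exceptions and skipped.
(my $checked = $text) =~ s/\{\{\s*Falsche ISBN\s*\|[^}]*\}\}//gi;

while ($checked =~ /ISBN\s+([0-9Xx][0-9Xx -]{8,15})/g) {
    my $isbn = $1;
    $isbn =~ s/[ ,.]+$//;    # drop trailing punctuation from the match
    print "Wrong ISBN checksum: $isbn\n" unless isbn10_ok($isbn);
}
</source>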
Error 37 : Incomplete list
Hi Mr. Kühn,
For the French Wikipedia, I know that error 37 should output around 1,500 items. However, the script listed only 15 items in all ([7]). The last item in the list is a kanji: 水.
Regards,
Cantons-de-l'Est, 14:04, 26. Apr. 2009 (CEST)
- French detection had a problem on April 26th that corrupted the lists of errors from the last full scan. All lists seem to have been reset. I am not sure there is any way to restore them. -- Laddo 66.131.214.76 16:54, 26. Apr. 2009 (CEST)
- No, the lists cannot be restored. But they will grow again every day. -- sk 21:18, 26. Apr. 2009 (CEST)
- Fair enough. These things happen with a beta version. No worries ;) -- Laddo 66.131.214.76 04:06, 28. Apr. 2009 (CEST)
- At the moment I have no backup of the lists, but I will add a backup procedure to the script. I think a smaller list is not a problem, though, because every day the script finds new articles with errors, so the list will grow very fast again. :-) -- sk 10:02, 28. Apr. 2009 (CEST)
Html comments in WCW
Hi, is it possible to skip the endings of HTML comments --> in error 056 (arrows)? Thanks a lot, —Jagro (cs.wiki) 16:52, 22. Apr. 2009 (CEST)
- Hello Jagro, my script excludes HTML comments before it checks for error 056. Can you give me an example of this error? I will check it with that example, but I think there is another error. -- sk 21:06, 22. Apr. 2009 (CEST)
- Since then I haven't seen this mistake, so I think it was caused by a comment and an arrow on the same page. I'm sorry, it looks like my mistake, not a mistake of the script. —JagroCZ (cs.wiki) 22:41, 28. Apr. 2009 (CEST)
Check 37 on en.wp
At en:Wikipedia:WikiProject Check Wikipedia#Category DEFAULTSORT missing for titles with special letters (partial AWB), I listed the various types of false positives.
Fixing one or the other type might reduce the number of false positives, but there are likely to be some that remain. Thus I added a list of articles at en:Wikipedia:WikiProject Check Wikipedia/skip 037. If you could skip these directly in the script, this would simplify things. -- User:Docu
- Each wiki would end up with hundreds of exceptions like these. It would be simpler to restrict the detection to roman accented characters -- see #Error 037 : exclude chinese and japanese characters on this page. -- Laddo 66.131.214.76 16:50, 26. Apr. 2009 (CEST)
- My suggestion was that the script would read the live list on en.wp and skip these items. It could be done with AWB, but if we wanted to fix more than the first 50 items, we would need to list the items that were fixed too. An alternative solution would be to change MediaWiki itself, either by fixing the sorting in categories or by automatically adding a default sortkey. -- User:Docu (April 28, 2009)
- In my opinion, it's simpler and safer to use rules rather than lists. I believe that your main problem is that {{lifetime}} is not recognised by the script as a substitute for DEFAULTSORT. As for fixing MediaWiki, good luck! It's been open since 2004... -- Laddo 199.22.61.2 14:10, 29. Apr. 2009 (CEST)
- I will try this at the weekend. -- sk 22:23, 27. Apr. 2009 (CEST)
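A minimal Perl sketch of a rule-based variant of this check: only titles containing accented Latin letters are flagged, CJK-only titles are ignored, and both DEFAULTSORT and a substitute template such as {{lifetime}} count as a sortkey. The titles, article texts and exact patterns here are made-up assumptions, not the script's actual code:
<source lang="perl">
#!/usr/bin/perl
use strict;
use warnings;
use utf8;
binmode STDOUT, ':encoding(UTF-8)';

# Made-up sample titles and article texts.
my %articles = (
    'Frédéric Chopin' => "Some text.\n[[Category:Composers]]\n",
    '水'              => "Some text.\n[[Category:Kanji]]\n",
    'Åse Kleveland'   => "{{DEFAULTSORT:Kleveland, Ase}}\n[[Category:Singers]]\n",
);

for my $title (sort keys %articles) {
    my $text = $articles{$title};

    # A sortkey is already set, either directly or via a substitute template.
    next if $text =~ /\{\{\s*DEFAULTSORT\s*[:|]/i;
    next if $text =~ /\{\{\s*lifetime\s*[|}]/i;

    # Flag only titles with non-ASCII Latin letters; ignore CJK-only titles.
    if ($title =~ /\p{Latin}/ && $title =~ /[^\x00-\x7F]/) {
        print "DEFAULTSORT missing: $title\n";
    }
}
</source>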
CheckWikipedia: HTML special characters in infoboxes
Cäsium137 has asked me not to perform any entity -> HTML replacement in infoboxes, because otherwise it causes problems for bots. This apparently affects astronomy articles, among others. Please consider ignoring infoboxes for this case in the CheckWikipedia list. -- Hhdw1 07:37, 29. Apr. 2009 (CEST)
Request: leading and trailing spaces in a link
Maybe you can also add an error for a trailing space in a link, [[link ]], and a leading space, [[ link]]. Rudolphous 21:22, 29. Apr. 2009 (CEST)
- Interesting idea. I have this in another script, but I can also check it in Check Wikipedia. -- sk 10:23, 30. Apr. 2009 (CEST)
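A minimal Perl sketch of such a check: internal links whose target starts or ends with a space are reported; the regular expression and example links are assumptions for illustration only:
<source lang="perl">
#!/usr/bin/perl
use strict;
use warnings;

my $text = "See [[ Example]] and [[Example ]], but [[Example|some text ]] is a different case.\n";

# Report internal links whose target begins or ends with whitespace.
while ($text =~ /\[\[([^\[\]|]*)(?:\|[^\[\]]*)?\]\]/g) {
    my $target = $1;
    if ($target =~ /^\s/ || $target =~ /\s$/) {
        print "Link with leading/trailing space: [[$target]]\n";
    }
}
</source>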
error 34
I saw the <noinclude> tag in a few articles. I think that sub error_034_template_programming_elements should detect this tag. Malarz pl 19:30, 26. Apr. 2009 (CEST)
- Please give me an example. -- sk 21:17, 26. Apr. 2009 (CEST)
- pl:Ekstraklasa polska w piłce siatkowej mężczyzn, pl:Gabriel Radomir - here many fields trigger error 34. <noinclude> was inserted into the article by substing the template. Malarz pl 11:38, 27. Apr. 2009 (CEST)
- The first point is: these articles don't use <noinclude>text</noinclude>, they only use <noinclude>. But I think the bigger problem of this article is the bad infobox. Why did you not program a normal template? Anything with {{{ }}} has no place in the article namespace; these are elements for templates and should only be used in the template namespace. -- sk 22:22, 27. Apr. 2009 (CEST)
- Better example: [8]. You can add a new test for detecting <noinclude> without </noinclude>. The article in the previous example is waiting for the end of a discussion about a new infobox template. Malarz pl 12:30, 1. Mai 2009 (CEST)
- In en.wp there are a few cases where noinclude or includeonly is used to generate larger lists, e.g. en:List of casinos in Oklahoma is transcluded into en:List of casinos in the United States. While I'm not entirely convinced that this is a good idea, just to say that it's being done. Anyway, en:Wikipedia:WikiProject_Check_Wikipedia#Template_programming_element is already quite exhausting. -- User:Docu (April 28, 2009)
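A minimal Perl sketch of how error 34 could additionally report a bare or unclosed <noinclude> in article text (typically left over from substituting a template); the counting logic is an assumption for illustration, not the script's actual code:
<source lang="perl">
#!/usr/bin/perl
use strict;
use warnings;

my $text = "{{Infobox example\n| field = 1\n}}<noinclude>\n[[Category:Example]]\n";

# Count opening and closing tags; an unpaired <noinclude> in an article
# is usually left over from substituting a template.
my $open  = () = $text =~ /<noinclude>/gi;
my $close = () = $text =~ /<\/noinclude>/gi;

print "Article contains <noinclude>\n"        if $open;
print "Unclosed <noinclude> tag in article\n" if $open != $close;
</source>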
Error 78
Hello Mr. Kühn,
This error reports a page where there are several <references ...> tags. These can serve to group notes:
- <ref group="note">...</ref>
- <ref>...</ref>
- <ref group="note">...</ref>
- <ref>...</ref>
These notes can be listed like this:
- <references/>
- <references group="note"/>
The 'cite.php' extension allows this feature.
For instance, in Procédé Haber, three sections use this feature: Références, Notes, and Traductions de.
Regards,
Cantons-de-l'Est, 20:14, 27. Apr. 2009 (CEST)
- This is a new error and I see this problem too. I will try to fix this problem at the weekend. -- sk 22:07, 27. Apr. 2009 (CEST)
- Ok. I have changed error 78. -- sk 22:18, 2. Mai 2009 (CEST)
Code 055: Nested small tags
Hi Stefan, do you know a good alternative to nested small tags? The following page [9] renders differently if one <small> is removed. Rudolphous 21:15, 29. Apr. 2009 (CEST)
- I think nested small tags are never a good thing, but at the moment I have no better idea. You could fix this with span tags and a different style. -- sk 21:03, 3. Mai 2009 (CEST)
Extension of error 10 - number of closing square brackets not correct
Hello! Today I'm pestering you on the right page for once. I keep coming across errors in references: often the opening or closing brackets are missing (3rd reference). Can't such errors inside a reference be detected? Admittedly, one would perhaps also have to check whether there are many cases where something like this is actually wanted, which I can't imagine at the moment.
Besides that, there is a rendering error in the Wikipedia software when displaying web links (also in references). If the link description contains a line break, the output is broken and the link only works on the first line. Can Perl be used to search for web links that contain at least one line-break character?
- Example
Correct: AAAAA BBBBB Wrong: [http://www.iaea.org/PRIS AAAAA
BBBBB]
Regards --Video2005 17:49, 24. Apr. 2009 (CEST)
- Interesting errors, I'll see what I can do. It may take a while, though. -- sk 22:08, 24. Apr. 2009 (CEST)
- No hurry, we still have enough to do with the existing errors... 8-) --00:38, 25. Apr. 2009 (CEST) (addendum for missing signature: Video2005 19:24, 25. Apr. 2009 (CEST))
- I have now added external links with a line break as a new error. -- sk 21:40, 7. Mai 2009 (CEST)
- SUPER! I'm curious what will come to light there. -- Video2005 00:25, 8. Mai 2009 (CEST)
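Since the question above was whether Perl can find such links: a minimal sketch that reports external links whose bracketed text contains a line break. The sample text is an assumption; the URL is only the one quoted above:
<source lang="perl">
#!/usr/bin/perl
use strict;
use warnings;

my $text = "Wrong: [http://www.iaea.org/PRIS AAAAA\nBBBBB] end.\nCorrect: [http://www.iaea.org/PRIS AAAAA BBBBB]\n";

# An external link [http://... description] only renders correctly if it
# stays on one line, so report any bracketed link that contains a newline.
while ($text =~ /\[(https?:\/\/[^\]\[]*)\]/g) {
    my $link = $1;
    if ($link =~ /\n/) {
        (my $shown = $link) =~ s/\n/\\n/g;
        print "External link with line break: [$shown]\n";
    }
}
</source>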
A new error
Hello Mr. Kühn,
The error 75 displays the :* error. I forgot to ask for :#.
Cantons-de-l'Est, 11:52, 28. Apr. 2009 (CEST)
- Also, I saw :-.
Cantons-de-l'Est, 22:19, 29. Apr. 2009 (CEST)
- Interesting! I will try it. -- sk 10:24, 30. Apr. 2009 (CEST)
- Ok, I have changed error 75. -- sk 20:45, 3. Mai 2009 (CEST)
- New bullets:
- · lorem ipsum
- · bolo bolo
- - lorem ipsum
- - bolo bolo
- Regards, Cantons-de-l'Est, 13:02, 7. Mai 2009 (CEST)
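A minimal Perl sketch of what checking for these indented or hand-made bullets could look like; the line patterns are assumptions for illustration only, not the script's actual code:
<source lang="perl">
#!/usr/bin/perl
use strict;
use warnings;
use utf8;
binmode STDOUT, ':encoding(UTF-8)';

my $text = <<'WIKI';
:* indented bullet
:# indented numbered item
:- pseudo list item
· hand-made middle-dot bullet
- hand-made dash bullet
* a normal bullet is fine
WIKI

# Flag lines starting with ":" plus a list marker, or with a hand-made bullet.
for my $line (split /\n/, $text) {
    if ($line =~ /^:\s*[*#-]/ || $line =~ /^\s*[·-]\s/) {
        print "Suspicious list line: $line\n";
    }
}
</source>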
Links to other wikiprojects
Hello, I would like to ask you for a new error similar to error 068, but one that wouldn't find links to other languages but rather links to other wikiprojects like Wiktionary ([[:wikt:link]]), Wikibooks ([[b:Link]]) etc. Thanks in advance. --Reaperman (cs) 11:55, 21. Apr. 2009 (CEST)
- Ok, I have added error 082. -- sk 22:20, 22. Mai 2009 (CEST)
Missing start brackets - Sudan
The bot apparently got confused on one of the images due to something ending (apparently correctly) in ]]}}]]. 151.151.7.54 15:21, 8. Apr. 2009 (CEST)
- The missing ones were a few lines above the ones displayed -- User:Docu
- ok -- sk 15:23, 23. Mai 2009 (CEST)
Check 43 in English Wikipedia (Template not correct end)
en:Gears of War appears to be a false positive. -- User:Docu
- Maybe two problems. See my change. -- sk 10:22, 29. Apr. 2009 (CEST)
- Good point. Thanks. -- User:Docu
- ok -- sk 15:23, 23. Mai 2009 (CEST)