Board Thread:Code Review/@comment-168424-20151003020735/@comment-24473195-20151006224945

Cqm wrote: @Dessamator the content in this context is completely parsed and sanitised by the MediaWiki parser, which is more or less bulletproof in terms of security. I haven't seen any security patches to it in recent months in any of the releases, despite an external security review and a specialised security team at Wikimedia. Anything major enough would likely be backported anyway, even though it's a different version. I'm not sure how many changes have been made to it since 1.19 was released, but given how archaic it still is and the recent work on Parsoid, I wouldn't be surprised if it's more or less the same as the current version of core.

There's not much point in checking the result either, as due to how it's been requested it'll just throw an error. It's a slightly dirty way of obtaining the HTML of a page, but it's fairly common and pretty well tested nonetheless.

That may be true, but it is better to be safe than sorry. According to MediaWiki's security guidelines:
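As a sketch of what "checking the result" could look like (the helper name and exact response handling here are my own, not from the script under review), a minimal guard around the JSON that an api.php?action=parse request returns might be:

```javascript
// Hypothetical helper: validate the JSON returned by a MediaWiki
// api.php?action=parse request before inserting its HTML anywhere.
// The API's usual shapes are { parse: { text: { '*': '<html…>' } } }
// on success and { error: { code: …, info: … } } on failure.
function extractParsedHtml(response) {
    if (!response || typeof response !== 'object') {
        throw new Error('Empty or malformed API response');
    }
    if (response.error) {
        // The API reports failures in an error object, not an HTTP status
        throw new Error('API error: ' + response.error.code +
            ' - ' + response.error.info);
    }
    if (!response.parse || !response.parse.text ||
            typeof response.parse.text['*'] !== 'string') {
        throw new Error('Unexpected response shape');
    }
    return response.parse.text['*'];
}
```

Even if the parser output itself is trusted, a check like this at least fails loudly instead of silently inserting `undefined` or an error payload into the page.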

"Any content that MediaWiki generates can be a vector for XSS attacks."

Although this is not a problem for Wikia, since all pages are basically text, newer versions of MediaWiki have different content models for pages, and different rules apply to each of them.
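To illustrate the content-model point (the helper below is my own sketch; newer MediaWiki exposes a per-page `contentmodel` field, e.g. via action=query&prop=info, with values like 'wikitext', 'javascript' or 'json'):

```javascript
// Hypothetical guard: only treat a page as renderable wikitext when the
// API-reported content model says so, rather than assuming every page
// is ordinary article text.
function isSafeToParseAsWikitext(pageInfo) {
    return !!pageInfo && pageInfo.contentmodel === 'wikitext';
}
```

A script that blindly runs every page through the same pipeline would handle a JSON or JavaScript page under wikitext rules, which is exactly where the "different rules" caveat bites.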

Anyway, I do agree that it is a small problem, and once we remove the tooltip code and the cookie-tracking stuff from the script, it should be good enough to store here.