Spam is a collective nuisance. Spammers don't adopt their techniques because they want to deface a particular page, a particular journal, or a particular website; it's a firehose tactic: they attempt to deface millions of pages knowing only 1% will get through, and every additional instance that does get through increases their benefit.
'Attractiveness' in this case, despite how zvi used it, has nothing to do with a particular aesthetic -- "spam on my journal makes my journal ugly" -- but with the overall success rate of spam attempts on a service. If a spammer attempts one million spam comments on Website X and only one gets through, Website X will be less attractive to the spammer than Website Y, where five hundred thousand of their spam comments get through. The success rate on Website Y means that more of the spammer's attention will be devoted to it.
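The point reduces to a toy calculation (a sketch only, using the illustrative numbers from the hypothetical above, not real measurements):

```python
# Toy model of spammer "attractiveness": the fraction of spam
# attempts that survive on a given service.
attempts = 1_000_000

success_x = 1 / attempts          # Website X: one comment gets through
success_y = 500_000 / attempts    # Website Y: half get through

# A spammer allocating attention in proportion to payoff will
# favor Website Y by the ratio of the two success rates.
print(success_y / success_x)      # prints 500000.0
```

The exact allocation rule doesn't matter much; any payoff-proportional strategy sends overwhelmingly more traffic at the leakier site.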
For real-world examples, compare wikis with and without any sort of spam deterrent: those that revert vandalism quickly and block spambots are not targeted at anywhere near the same rate as those that don't. (Not just in the sense of "there is less overall spam because it is being removed", but in the sense of "there are fewer spamming attempts made against the wiki".) Each individual act of spam is the vanguard for a thousand zombie botnets waiting to spew filth.
I don't know if you ever look at LiveJournal's latest-posts feed, but a month ago you couldn't load that page without 85% (conservatively) of posts being spam. After some recent changes, LiveJournal now suspends around 30,000 spambot accounts per day. The spambots are evolving. If a site like DW were to say "okay, what if OpenID accounts could post links in comments and have them linked normally," the likely next step would be for those botnets to create accounts on LJ, where there is little obstacle to account creation (reCAPTCHA has not only been cracked; "CAPTCHA forwarding" is also common, where the botnet farms out the human tests to people paid pennies for every CAPTCHA solved). Then, rather than use those botnet-controlled accounts on LJ, where the spamtraps now in place could detect the activity, they would use them as OpenID accounts on other services. It's already happening, quite frequently, because most sites don't cooperate with each other to detect and block spam cross-network.