Hacking And Philosophy: Crackdown Part III


“Law and Order” may be my favorite chapter of Hacker Crackdown: it covers the early-’90s seizures and arrests from the perspective of law enforcement. While the chapter has its flaws, I highly recommend it; [Sterling] treats both sides with patience and understanding, revealing how similarly adrift everyone was (and to some extent, remains) in the uncertainty of cyberspace. I also recommend the [Gail Thackeray] / [Dead Addict] joint talk from DEFCON 20 as an accompanying piece, as it bridges the twenty-year gap between Crackdown’s publication and today—and [Thackeray] herself is the focus of this chapter.

As always, everyone is welcome in our weekly discussion, even if you haven’t been keeping up with our progress through Hacker Crackdown. You can download it for free as an audiobook, too! Onward for more!

0. From Last Week’s Discussion:

RE: Q1 (Has the legal system’s definition of digital theft changed?) 

[dynamodan] pointed to Streamripper as a unique example of something that’s protected by law. Wikipedia suggests that the recording of internet radio may be covered under both fair use and the Audio Home Recording Act of 1992. He also noted that some companies have abandoned rigid copyright enforcement in favor of intelligent marketing strategies that encourage consumers to purchase rather than pirate.

It’s not all progress, however. [dan] offered a more critical perspective, emphasizing that those who break the rules in the digital world often receive disproportionately severe punishments. I think this is the result of both the disconnect between the technology and those making (and enforcing) the laws, and the fact that “digital theft” is still a relatively recent concept. As we’ll see in this week’s chapter, no one quite knows how to deal with law or order.

RE: Q2 (Should sensitive documents such as E911 have been open source?)

[Deeply Shrouded & Quiet] lamented the loss of the spirit and mentality of the ’80s–’90s hacking era, and reiterated that most hackers acted out of curiosity rather than malice. [dynamodan] challenged the innocence [Sterling] ascribes to hackers like [Jobs] and [Woz], explaining that the author blurs the line between malice and curiosity. It would take a wealth of ignorance to absolve them of responsibility for their actions.

[dan] suggested all products of the federal government should be open source for the public, considering tax dollars funded them. I suspect there’s more discussion to be heard about this topic. I encourage people to keep responding!

RE: Q3 (What’s happened to hackers and “bragging” over the years?)

[dynamodan] considered that bragging may have been the result of the hackers’ inability to realize how “real” cyberspace was; they bragged because they saw no consequences in this new frontier. Perhaps the issue is more closely related to the age group of the hackers involved, as [dan] observed that the bragging continues even today, though he’s probably also right that the hackers in Crackdown were caught through old-fashioned police work rather than because investigators stumbled across some hapless braggart.

I. What’s important for this week’s discussion?

This week I’m keeping my response short.

If you forget everything else about this chapter, remember this paragraph:

…American society is currently in a state approaching permanent technological revolution. In the world of computers particularly, it is practically impossible to ever stop being a ‘pioneer,’ unless you either drop dead or deliberately jump off the bus. The scene has never slowed down enough to become well institutionalized. And after twenty, thirty, forty years, the ‘computer revolution’ continues to spread, to permeate new corners of society. Anything that really works is already obsolete. [1]

We’ve seen staggering changes in our world over the past half-century, and Moore’s Law keeps on truckin’—for now. [Sterling] explains that this ever-changing landscape can only be addressed by dynamic groups like the FCIC (does it even exist anymore?), which lack traditional hierarchies in favor of “ad-hocracy.” We’ve certainly seen other shifts to the ad hoc; crowdsourcing is an obvious example, and Kickstarter is basically ad-hoc R&D funding.

Perhaps not as important an excerpt (but just as interesting) is [Sterling’s] advice to young readers:

In my opinion, any teenager enthralled by computers, fascinated by the ins and outs of computer security, and attracted by the lure of specialized forms of knowledge and power, would do well to forget all about hacking and set his (or her) sights on becoming a fed. Feds can trump hackers at almost every single thing hackers do, including gathering intelligence, undercover disguise, trashing, phone-tapping, building dossiers, networking, and infiltrating computer systems—criminal computer systems. Secret Service agents know more about phreaking, coding, and carding than most phreaks can find out in years, and when it comes to viruses, break-ins, software bombs, and Trojan horses, feds have direct access to red-hot confidential information that is only vague rumor in the underground.[2]

Here I feel [Sterling] is doing a great disservice to the concept of hacking: a concept he seemed intent on defending in this book’s introduction. Embracing hacking as a concept shouldn’t necessitate performing illegal activities, and [Sterling] should know better. Instead, I find this section an attempt to appeal to a teenager’s desire for power and forbidden knowledge by suggesting a career in law enforcement. While I encourage computer-literate, intelligent minds to seek jobs in law enforcement, [Sterling’s] missing his audience here and simultaneously dissociating himself from the hacker community. Though he says throughout this chapter he isn’t a hacker, he’s been so careful to represent them respectfully until now that these comments come across as a “dad knows best”-infused warning: exactly the assertion of authority hackers want to react against.

II. Questions for this week

1. Check out page 160: “A typical hacker raid goes something like this…” Have any of our readers been raided? No need to fill us in on specifics, but I’m curious if anyone was the victim of “gray-area” legal trouble, where the letter of the law was severely outdated given the circumstances, etc. Was the Feds’ perspective in this chapter what you would have expected?

2. What’s your take on [Sterling’s] advice to young readers? Clearly he did not anticipate the thriving alternative business model of pentesting, but is his advice still sound? Are my criticisms too harsh?

3. Earlier in the chapter [Sterling] quotes [Thackeray’s] anticipation that hackers will soon be responsible for killing people.[3] [Barnaby Jack] demonstrated the possibility with pacemakers last year, leading many to speculate about his death prior to Black Hat. By no means is this Wikipedia timeline an exhaustive list of all hacks, but are there any instances of hacking directly leading to death?

NEXT WEEK:

Read the final chapter of Hacker Crackdown, “The Civil Libertarians”


NOTES:

[1] Bruce Sterling, The Hacker Crackdown, (New York: Bantam Books, 1992), 193.

[2] Ibid., 217.

[3] Ibid., 185.


Hacking & Philosophy is an ongoing column.

21 thoughts on “Hacking And Philosophy: Crackdown Part III”

  1. HaD Staff… Notice ^^^^^^^^ and vvvvvvvv… See how there are no comments? I don’t think anyone is reading this and/or no one cares about philosophy. MORE HACKS PLEASE.

    Thank you.

    1. It’s mostly because articles like this give people little to complain or flame about. Since you have managed to prove this idea wrong by trailblazing the way, I am sure a sea of comments is incoming.

    2. I’ll disagree… This column has been very interesting, especially as a hacker during this time period. This is basically the time period where hackers went from “evil” (in the media’s eyes) to the innovators of today. The government got egg on its face by trying to persecute these hackers instead of embracing them.

      BTW, there are a lot of myths surrounding the E911 document. It was a file found on a private packet-switched network run by one of the telcos, with defaults on the login, if I remember right.

      The information in it, and the related BSPs, had *always* been available for purchase by the public. This gets brought up during the trial by Scott Ellentuch. Only a few of their documents were actually restricted.

    3. Josh put up a 1000-word post. It’s part three of a four-part series. There are references and a bibliography. It’s also a great post. There isn’t a problem with the post, leaving one place where the problem lies.

      ANYWAY, here’s some discussion: I’m sure a few of the readers here had a ‘Free Kevin’ bumper sticker back in the day. If Mitnick happened today, we’d all be saying, “duh, of course he got caught. Shouldn’t have broken the law.” Discuss.

      Also, the Holy Roman Empire was neither holy, Roman, nor an empire. Talk amongst yourselves.

  2. “Have any of our readers been raided?”

    Well I haven’t been raided, but when I worked at BlueCross I found a critical security flaw in their Altiris systems management product which proved they didn’t know how to properly implement encryption, nor did they audit their own code:

    http://www.securityfocus.com/bid/37953

    Before that I had also discovered a SQL injection vulnerability in their Altiris knowledge base website, which I reported without incident. When I informed them of this encryption vulnerability and followed all their reporting policies, they sent their legal team after me. My boss’s boss (the Director of IT) received a cease-and-desist from their lawyers. I was chewed out by him and my PC was wiped, although ironically all my emails, including ones which contained portions of their source code, were not deleted. I was never told officially how this matter was settled, but shortly afterwards BlueCross decided to procure their data loss prevention and endpoint encryption products. Coincidence? Well, you can be the judge of that.
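
    For readers unfamiliar with the bug class, here’s a minimal sketch of an injection-prone query versus the parameterized fix, in Python with a hypothetical schema (an illustration of the class of bug, not the actual Altiris code):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE kb_articles (id INTEGER, title TEXT)")

    def search_vulnerable(term):
        # Anti-pattern: user input concatenated straight into SQL.
        # A term like "x' OR '1'='1" returns every row; nastier
        # payloads can read or modify other tables entirely.
        query = "SELECT * FROM kb_articles WHERE title = '" + term + "'"
        return conn.execute(query).fetchall()

    def search_safe(term):
        # Fix: a parameterized query; the driver handles quoting,
        # so input can never change the structure of the query.
        return conn.execute(
            "SELECT * FROM kb_articles WHERE title = ?", (term,)
        ).fetchall()
    ```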

    1. It really is a terrifying reality that revealing a security flaw can potentially land you in jail. It seems like this is the perfect place to draw up appropriate legislation offering protection to the right people. It’s a shame that there’s an overwhelming fear of “what else might the person know?” and a desire to punish, rather than admitting the error existed in the first place and dealing with it appropriately.

      Glad you managed to get out of a bad situation.

      1. Not only that, but the country’s largest insurance company (technically I worked for the Association, which is more of a trade association and lobbyist group for the actual plans) was willing to implement a closed-source encryption solution from a company that they knew for a fact didn’t know how to implement encryption, nor audited their enterprise-class software products.

        The security flaw was very obvious, and I stumbled upon it by accident while trying to figure out how the asset management solution handled duplicate asset tags and serial numbers. I saw it was encrypting data, drilled down in .NET Reflector to their encryption function, and saw it was encrypting sensitive data with a string containing an error message about the monitor’s resolution (who knows why). Their Altiris product also routinely violated best practices, such as requiring domain administrator or root-level privileges when it didn’t need them. Hacking this one system would ensure virtually every computer (client & server) was rooted.
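
        To make the anti-pattern concrete, here’s a minimal sketch in Python (the real product was .NET, and the key string below is made up): anyone who decompiles the binary recovers the constant and can decrypt everything it ever “protected.”

        ```python
        import base64
        import hashlib

        from cryptography.fernet import Fernet  # pip install cryptography

        # Anti-pattern: a constant string shipped inside the binary is used
        # as the encryption key. Decompilers (.NET Reflector, ILSpy, etc.)
        # hand it to any attacker verbatim.
        HARDCODED_KEY_SOURCE = "Error: unsupported monitor resolution"  # hypothetical

        def derive_key(source):
            # Hashing the string to fit the cipher's key size doesn't help:
            # an attacker can run this exact derivation too.
            return base64.urlsafe_b64encode(hashlib.sha256(source.encode()).digest())

        def encrypt_sensitive(data):
            return Fernet(derive_key(HARDCODED_KEY_SOURCE)).encrypt(data)

        def decrypt_sensitive(token):
            # Anyone who extracted the string can call this and win.
            return Fernet(derive_key(HARDCODED_KEY_SOURCE)).decrypt(token)
        ```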

        It is also troubling that Symantec would threaten to sue someone (presumably for violating a EULA) for reverse engineering when they themselves engage in the same practice.

        And sadly I never truly got out of it: about a year later I asked to switch teams because of personal issues. My boss told me he noticed I was stressed out and told me to take the rest of the day off. When I came back the next business day my keycard didn’t work, and he refused to respond to my emails. A rather shitty way to let someone know they’ve been fired.

        1. I have this theory about something that I have seen time and time again in the Security/BBS/Builder etc. subcultures. There seem to be two distinct types of people in these scenes: those who are happy to sit and shake their fingers at the people who are ‘breaking the rules’ but are strangely happy to abuse their own power, and those who don’t see knowledge as forbidden, who may have significant power but don’t use it.

          The first time I saw this was in the BBS scene, where sysops who were always going on about hackers/crackers and pirated software were quite happy to lie and cheat to get what they wanted. I found it perplexing that the hackers/crackers seemed to be more honest than those who didn’t covet secret or forbidden information. I am obviously generalising, but as one of the ‘bad guys’ I always found it strange that many people who I would have assumed to be ‘good guys’ were more corrupt, sexually or emotionally abusive to others, or ended up being more criminal than the majority of the ‘bad guys’ that I knew at the time. Somehow the ‘good guys’ found things like toll fraud, piracy, and exploits abhorrent, but breaking promises, stealing ideas, or taking advantage of their social standing was completely acceptable.

          Matt, I think that you probably ran afoul of a Symantec VP or sales team who were likely outraged that you endangered ‘their’ reputation but had no issue thinking it was morally right to target you with ‘the lawyers’. I would wager that these same people have no problem overselling and under-delivering their products.

          Perhaps it comes back to The Mentor and the simple comment, ‘yet we are the criminals’.

  3. Worked at an insurance company for a short period of time. Several times I had to remove viruses from machines that had access to scanned premium payment checks which were stored on a share. It was pretty scary to think that all of that account information and those scanned signatures were sitting unencrypted on a share, and that the share was being accessed by compromised machines.

  4. RE: Q2 (Should sensitive documents such as E911 have been open source?)
    I’m a fan of open source & free-as-in-freedom. I think there’s almost always value in open sourcing anything. Sometimes there are risks as well, but generally the benefits outweigh the risks. Most of the time, these benefits aren’t immediately obvious. Computers, GPS, cellphones, and the internet were originally conceived as military technology, but now they are the basis of our civilization. I believe any government-funded technology that can be responsibly open-sourced should be.
    In the case of E911, I’d have to disagree with Dan’s suggestion that “Sadly, there is no reason to open source emergency systems.” Enhanced 911 is available across North America, and the EU uses 112. Why not let developing nations copy and adapt our system? The roads and infrastructure in places like Africa are entirely different, and they understand them better than we ever will, so we couldn’t feasibly adapt our system for them if we wanted to. Maybe large companies would adopt the structure for emergency management. The company I work at has a 4-digit internal phone number for security, and we are encouraged to report incidents no matter how small.
    The downside is that it would be much easier to find security holes. Although this is why Linux has evolved to become so secure, government bureaucracy isn’t exactly known for its quick response time, so any holes might go uncorrected for quite some time. But even the worst bureaucracies can respond with police, ambulances, and firefighters when needed. Because of necessity, these people are uniquely free of red tape, and can do what needs to be done without a dozen layers of approval. A few software engineers with similar freedom can respond just as quickly.
    A tremendous amount of effort around the world goes into reinventing the wheel, but open source lets us spend our time forking, improving, and expanding our collective software and hardware.

    1. As counter-intuitive as it may seem to offer up the code / systems info for something as crucial as emergency services, I think you’re right that the Linux “bazaar” model (as in Raymond’s “Cathedral and the Bazaar”) will ultimately lead to better security and stability. I’m not quite sure how the government would even begin to open up this kind of infrastructure to basically volunteer crowdsourcing.

      Similarly, my initial reaction is to extend the same logic to the shadiness of the NSA. They’d probably be more likely to have the consent of the people if the people were collaborating in development, but I’m afraid that the end result (the community authoring software to spy on itself) is even more panoptic than the current model, even if they are choosing their own criteria, limitations, etc.

    2. >The downside is that it would be much easier to find security holes[1]. Although this is why Linux has evolved to become so secure, government bureaucracy isn’t exactly known for its quick response time [2], so any holes might go uncorrected for quite some time[3]. But even the worst bureaucracies can respond with police, ambulances, and firefighters when needed[4]. Because of necessity, these people are uniquely free of red tape, and can do what needs to be done without a dozen layers of approval. A few software engineers with similar freedom can respond just as quickly.

      the place I was going was,
      statement 1, true, much easier to find holes
      statements 2 and 3: any holes are likely to be on a “we really should want to do something about that” list.

      but that’s kind of where statement 4 falls over.

      if you open source the emergency system, and bugs are found and bugs are exploited, crashing the systems, then how are the systems going to work? how are even the worst bureaucracies going to get police or ambulances out the door when the emergency phones don’t work, or the call logging software doesn’t work, or the dispatch software doesn’t work, or the systems to send addresses and jobs through to the computers in the vehicles don’t work, when the GPS systems that they use for satellite navigation don’t work?

      in acknowledging points 1, 2, and 3, you surely must see how point 4 is in danger?

      (as disclosure, my mrs works for the ambulance service (and used to work in the call centres); their systems are far from perfect, the amount of red tape that they have to deal with in their jobs is ridiculous, and the systems that they use for dispatching etc. are usually described as flaky at best.)

      security through obscurity sucks, and if it were the only thing that a company (supplying the systems) was doing then they should be shot. but as a first line of defence, when code has already been audited etc., it’s not terrible…

      In any case the service IS special (in that uptime must be ensured), and vast resources (read: money) may be given to achieve this, but assured uptime, if anything, ADDS more red tape rather than taking it away.

      In systems where “if it breaks” then “no worries” (the system is down for a day whilst you restore), the OSS bazaar might work.
      but in systems where uptime must be guaranteed, even installing a simple patch becomes a red-tape nightmare: doing the paperwork for change requests, listing justifications for installing specific patches, testing the patches, rehearsing installations, making sure backups are taken and warm standbys are available, listing risks (immediate and long-term, small and critical), what your back-out plan is, how that’s been rehearsed, and when and how the decision to back out a change will be invoked.

      I hate to come across as some kind of know-it-all (even though I do it a lot), but the idea that critical systems can be patched little and often with basically no red tape is just a dream: a dream that I wish were true, since I work with a lot of critical, uptime-must-be-assured-with-financial-penalties type systems and really wish that a lot of the red tape would disappear. But it is there for a reason, and that’s unlikely to change.

      the first chapter of the book describes how only half of all switch nodes went down because they were doing a staggered upgrade. if you’re rushing to install software because you’re trying to patch bugs being exploited in the wild, possibly introducing more bugs, then things can go downhill and fail very quickly.

      when you’re talking about the emergency response systems that’s just not acceptable…
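
      (For what it’s worth, the staggered-upgrade idea is easy to sketch; this is a toy illustration with hypothetical function names, not how any real dispatch system is patched:)

      ```python
      def staggered_upgrade(nodes, apply_patch, is_healthy, batch_size=2):
          """Patch nodes a few at a time, halting if a batch goes bad,
          so a broken patch can never take down the whole fleet at once."""
          upgraded = []
          for i in range(0, len(nodes), batch_size):
              batch = nodes[i:i + batch_size]
              for node in batch:
                  apply_patch(node)
              if not all(is_healthy(node) for node in batch):
                  # Stop the rollout: the rest of the fleet is still on the
                  # old, known-good version (the same reason only half the
                  # switch nodes fell over in the chapter-one crash).
                  return upgraded, batch
              upgraded.extend(batch)
          return upgraded, []
      ```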

      I like the idea that we give away technology to developing nations, but I can’t see that it’d ever be done in such a way that source was given; compiled programs, maybe?

        1. I watched the section that you suggested. I’m not sure I understand in what sense we are talking about open source.

          Quinn seems to talk of open source infrastructure as if it’s a community effort. Can a community run successful utilities? (Could they raise poles for cables, etc.?)

          Whereas I mean open source to mean access to plans and designs, detailed information.

          And the knowledge to understand that system.

          1. An open source project doesn’t work unless some community is developing and maintaining it. Open access is definitely a prerequisite.

            As a contrast (and it’s probably not the best analogy, but…), I think one could argue that, say, political affairs (and for this example let’s say local, small-town politics) are “open source” in that the community is allowed to attend and sometimes participate in meetings, can vote/become elected, records are available, etc. I don’t think that quite fits what I would consider “open source,” however, because it’s institutionalized. That’s not to say all open source projects have to be some whimsical, ad-hoc free-for-all, but I think it does require that it’s ultimately decentralized, which by definition implies the return of power to the overall community rather than any particular established entity.

            Specific to our conversation, though, what’s your take on open source, and/or which entities are the developers?

          2. what I mean is.

            Quinn appears to talk about open source infrastructure as if it’s a local community effort: can a local community put pylons up? can a local community manage a substation or the distribution of goods and services? (can a community RUN infrastructure, not just design infrastructure?)

            in context, the question is whether the design of the 911 emergency call handling system can be open sourced; we’re not expecting a community to run the telephone service, but asking if a community can help design switching infrastructure etc. to improve call handling.

            Yes, open sourcing the hardware and software of the handling process will allow people to improve it; it will allow people to take it and roll their own versions for campus security services etc.

            but fundamentally, the open source infrastructure that Quinn talks about in the video, or the way in which it is approached in the video, seems vastly different from talking about leaked design and procedure documents and asking whether they should have been open source anyway.

            It’s easy for communities to design infrastructure. a good example is a train line: you say in the mornings there are lots of commuters, so you put more trains on; midday, not much, so you reduce services to save costs; then you ramp up services again to bring commuters home…

            deciding where to put the tracks for the train is not such an easy option: when you ask the community, nobody wants a train where their bedroom used to be. that’s where open-sourced communities would fail, where you rely on the goodwill of people and vested interests get in the way.

            the idea of the wisdom of crowds (which is what open source hacking really is) is a great one, right up to the point that the crowd gets small and self-concerned; then it falls apart.

      1. First, apologies if I’m a bit argumentative. I also pick apart new mathematical concepts when they run counter to my intuition :) . Science comes from rigorous testing, and knowledge comes from debate. You appear to have a much better understanding of the E911 system than I do. I initially thought we could simply release the source code and reintegrate improvements back in. It looks like the entire system would be affected. Perhaps a gradual transition would be possible: releasing small portions of the source code at a time, and giving sysadmins the freedom to act immediately and fill out paperwork to keep their changes rather than to make them.
        Ideas like Ushahidi do something even better, though: they supplement existing structures and methods, and do things they can’t. The bazaar and the cathedral supplement each other’s weak points as much as they are at odds with each other (props to Josh Marsh for the link). Let’s crowd-fund a local fire department before we try anything larger. Let’s build hackerspaces on par with any lab before we demand that scientific journals publish their findings for free online. Some of these ideas will work and some won’t. But before we dismiss open-source E911 offhand, let’s test it. Let’s start with the smallest tests we can (maybe a college campus security system), and then work up to larger and larger systems. It’s better to try and fail than never to have tried. Who knows? Maybe we’ll achieve RepRap’s “wealth without money”, the FSF’s dream of information freedom, and that whole participatory democracy & government transparency thing. For me at least, THAT is a utopia worth building.

        1. I guess the answer to the question of “can it work?” is a kind of emphatic “yes, it can.”

          You follow the Debian method of dedicated package maintainers: anything committed to the tree has to be trusted by a trusted approver (someone who’s earned their spurs with previous work and commits, etc.).

          That’s great. Debian is arguably one of the most stable distributions because all code is reviewed and tested several times before it’s included in the tree; then it goes to unstable, then testing, and finally the stable release…

          It all goes really well.., until it doesn’t.

          And when it doesn’t, you’re looking at problems where insecure code runs on production systems: you can’t run the unstable release because it’s unstable, and you don’t want to run the stable release because it’s insecure…

          On top of that, even with there being a group of hand-picked maintainers who need to approve each line of code, it’s not unknown for keys to be stolen and bugs to actually be introduced into the code base.

          On top of that, you’re expecting all maintainers to be friendly, but what about foes? What about attacks on a nation in the style of Stuxnet: a bug engineered to attack a known system?

          I wish that everything could be open source without fear of “bad hackers”; I’m a huge believer in sharing skills and sharing knowledge. I just don’t see that the primal fear of bad people doing nasty things to the emergency systems that my friends, family, and I may need to rely on in a life-or-death situation can be appeased.

          That is, I have no doubt that open sourcing can improve the system; a million eyes are better than one, but there are political pressures that prevent it from happening (i.e., even people who know it’d improve the system are too scared of the what-ifs for it to happen).

  5. If you take any of Sterling’s thoughts to heart, then you need to take articles like this seriously. I’m new to the professional hacking scene, but I’ve been an analyst pretty much my whole life. It’s the Wild West out there. Some unknown guy from across the globe is going to roll in tomorrow and shoot up the “town” with his new take on some custom code. “He” could be a teenager, a girl (whoa!), a group of hacktivists in a basement, a loose cell organization that has almost no physical contact, or a concerted effort by a nation-state. They could cripple you, your neighborhood, or even a nation itself. They can pull a black project out into the light, or cloak a criminal endeavor into the murky depths of a traceless commodity network.

    In short, anything is possible.

    The field seems to be growing continuously. The technology is certainly outpacing the legislation. Listening to the US government, the stance seems to be “Do No Harm”, but their focus is on not harming those that hold the consolidated purse strings. Very little seems to be done to prevent harm to the individuals and small groups seeking to better things through altruism.
    Those folks that quietly raise the alarm, telling a company “Hey, you guys have a problem here. You missed something here,” should never face retribution.
    Yet most of us have a story on this. Maybe not something that has happened to ourselves, but someone we know or interact with (I’ve got a colleague who has been asked not to present at certain conferences because he quietly pointed something out).

    We should not limit ourselves to asking “Has the definition of digital theft changed?”, instead, we should be looking at “Has the rampant pace of technology changed the definition of malfeasance?”
    I don’t think the definition has changed, but the ability of corporations and lobbying groups to cry “Foul!” has certainly increased. Companies are also attempting to use the law to shift product issues to the end user. Farmers can be sued if their fields show evidence of replanting crops from the previous year’s seed. Broadband users can be sued for pirating without any evidence that the pirated material is in their possession. Simply poking at a product to see if it is secure can bring you up on charges for circumventing copy protection.
    If you can’t put a product out while maintaining your business model, do you have any right to shift the security of your business to an uninterested consumer?

    We need to ensure that it is easy, AND PROFITABLE, for the world to do good. We also need to take a stand that we, as consumers, have certain rights. We also do not have to take on certain responsibilities if we do not want them. We should certainly never have to increase our responsibilities without fully understanding them (I point to the current state of piracy legislation).

    I think bad guys are always going to exist. They’re certainly going to exist if it is easy to be a bad guy. We’re going to make more and more of them if we prosecute good guys.
