My AT&T Uverse Installation Experience

I moved to a new apartment last weekend, and as part of the move I switched from Time Warner cable/internet/phone to AT&T Uverse service.  I made the change because AT&T advertised higher download speeds for a similar price, and my TW connection gave me troubles with Netflix and sometimes Youtube.

With respect to the cable and internet installation I have to say AT&T was stellar.  I had a few issues with my Time Warner installation and none of them were present in this.  My download speeds are within 10% of the advertised speed, and I’m enjoying my service greatly.

The phone installation was a nightmare.  I tried unsuccessfully for two weeks to transfer my number, then had to give up and try unsuccessfully to have AT&T assign me a new number.  Here’s how it worked.

In the 2nd week of August, I called Time Warner and notified them I’d be terminating my service on August 23rd.  They offered me a lower rate, but I really wanted to try Uverse, so I went ahead with the cancellation.  On the same day, I set up my Uverse service online, and the site told me my number could be ported.  Great!

About a week later, AT&T called me to tell me there was a problem porting the number; apparently the number had a pending order.  I called Time Warner, and the only pending order was the cancellation.  We decided to remove the cancellation order since I could cancel when I turned in my equipment.  I called AT&T and asked them to try again.  I still had about 4 days before moving, so I figured it would be fine.

Nope.  AT&T sent me a few emails the day before the move to indicate there was a problem.  They were trying to port my cell phone number, which had nothing to do with the number I asked them to port.  So I asked them to try again, with the right number this time.

No dice.  AT&T called me the day of my move to let me know there was still a pending order on the number.  Time Warner insisted there were no pending orders.  I couldn’t do anything because I was busy moving, so I put it off until the next day; I still had 2 days until the installation.  I told AT&T to assign me a new number, and after a few minutes the person on the phone told me what my new number would be and I happily continued to wrap up my move.

The day the tech was scheduled to do the installation, I got a call from AT&T.  The robovoice let me know there was a problem porting my number (!) and I’d have to reschedule the voice installation.  What?  I talked to a representative and there was apparently no indication that I’d already been assigned a number.  I asked her to assign me one, and she got confused and decided to remove my voice order.  At this point, the installation tech was at the door so I told her whatever and ended the call.

The installation tech had the number I’d been told I’d be receiving before, and knew nothing of the number port.  We decided to try the voice install and see what happened.  Interestingly enough, the phone halfway worked.  I could dial out, and caller ID would show my old Time Warner number.  But any attempt to dial in failed.  The tech couldn’t really do anything, but told me some stuff to tell the phone support so they wouldn’t waste time sending another technician to “install” the phone kit.  So I ate lunch and started that call.

It was an hour-long ordeal.  The first tech support guy couldn’t really do anything because the woman earlier in the morning had removed voice service from my account.  He transferred me to someone in sales. She added voice service again, and helped me notice that I’d lost the promotional credits to my account in all of this and re-applied them. After doing that, all she could do was schedule a voice installation, but when I explained that I was already installed and just needed a number she went to talk to some supervisors and figured out she could connect me to someone in high-level tech support. This guy was very confused that I was calling him from a cell phone; apparently he thought I was calling him from my voice service that I had already explained I didn’t have.  After I explained (twice) that neither my cell phone number nor the Time Warner number should be ported and I needed a brand new number, he got to work.  Some 20 minutes later, he told me I had a number and read it to me.  We did a test call, it worked, everything’s happy.

The number he read to me isn’t the phone number I ended up with.  It was one final problem, but it wasn’t tough to call my cell phone and figure out what my real number was.

What the heck happened?  How could AT&T screw up phone service this badly?  Isn’t it their traditional business?  Why did they give up on trying to transfer my Time Warner number and start trying other numbers associated with my account?  Why didn’t tech support on Tuesday know what tech support on Monday had done?  How did my installer get “Give this man number A” without my account having a note of that?  Why couldn’t the final and helpful person read me the number he *actually* assigned to me?

Aside from that, I’d highly recommend Uverse, I just had to vent about the phone process.

Is parameterized logic always harmful?

I’ve read plenty of times that parameterized logic is considered harmful in testable design.  What’s parameterized logic?  It’s stuff like this:

int Calculate(int lhs, int rhs, Operation op)
{
    if (op == Operation.Add)
    {
        return lhs + rhs;
    }
    else if (op == Operation.Subtract)
    {
        return lhs - rhs;
    }
    // ... and so on for each member of the Operation enum
}

From what I understand, testable design would say this method needs to be refactored.  It does multiple things, and the op parameter dictates what happens.   The reason it’s discouraged is that you end up encoding the contract for *all* of the operations in one place.  For example, if there is an Operation.Divide, then the method will throw an exception when rhs is 0.  Testing this method is possible, but it will be tedious.  Documenting it will be a pain as well.

One alternative would be to replace the single Calculate() method with individual methods like Add(), Subtract(), and so on.  Another technique would involve creating an interface for a type that can operate on two numbers and letting Calculate() take one of those:

int Calculate(int lhs, int rhs, Operator op)
{
    return op.Operate(lhs, rhs);
}

Obviously the injection technique is more complicated, but it provides flexibility: in a library, it lets users provide their own implementations of Operator that perform operations that might not be built in.  Calculate() still has multiple behaviors, but the testing and documentation burden is shifted to the Operator implementations.
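
For concreteness, here’s a minimal sketch of what that Operator type might look like; the names here are mine, not from any real library:

public interface Operator
{
    int Operate(int lhs, int rhs);
}

public class AddOperator : Operator
{
    public int Operate(int lhs, int rhs)
    {
        return lhs + rhs;
    }
}

A library user who needed, say, a modulus operation could write their own ModuloOperator and call Calculate(lhs, rhs, new ModuloOperator()) without the library changing at all.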

So what got me thinking about this?  Someone asked me how to convert to and from hexadecimal strings in .NET.  I realized there are three ways to do it: Parse(), TryParse(), and the Convert class methods.  I don’t like the thought of convincing someone there’s “one true way” to do things, so I outlined all three and noticed that all of them use parameterized logic.

The Parse() and TryParse() techniques will parse a number differently depending on the NumberStyles parameter you pass.  For example, “10” will parse as 10 unless NumberStyles.HexNumber is specified; then it will parse as 16.  Likewise, “B000” is a valid input only when the hex number style is specified.  From a testability standpoint, this seems like it’d require a matrix of tests for just the method.  From a documentation standpoint, Microsoft just documents that a FormatException can be thrown without going into any detail (granted, it’s usually easy to figure out why the exception was thrown.)
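
To make that concrete, here’s a quick sketch of the behavior I’m describing (NumberStyles comes from System.Globalization):

using System;
using System.Globalization;

class ParseDemo
{
    static void Main()
    {
        Console.WriteLine(int.Parse("10"));                            // 10: decimal rules by default
        Console.WriteLine(int.Parse("10", NumberStyles.HexNumber));    // 16: same digits, hex rules
        Console.WriteLine(int.Parse("B000", NumberStyles.HexNumber));  // 45056
        // int.Parse("B000") without the hex style throws a FormatException.
    }
}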

I feel like Convert’s methods do things differently.  The relevant methods take the string and an integer that represents the base for conversion.  This still introduces the problem that sometimes “10” means 16 and sometimes “A” is invalid.
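
A similar sketch of the Convert-style call, for comparison:

using System;

class ConvertDemo
{
    static void Main()
    {
        Console.WriteLine(Convert.ToByte("10", 10));   // 10
        Console.WriteLine(Convert.ToByte("10", 16));   // 16
        Console.WriteLine(Convert.ToByte("A0", 16));   // 160
        // Convert.ToByte("A0", 10) throws a FormatException;
        // only bases 2, 8, 10, and 16 are accepted.
    }
}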

So what’s different between the two?  From a user’s standpoint, not much.  If you try to guess the implementation, I bet they’re very different.  Byte.Parse() probably looks like this:

static byte Parse(string input, NumberStyles numberStyles)
{
    if (IsSet(numberStyles, NumberStyles.HexNumber))
    {
        // parse hexadecimal
    }
    else
    {
        // parse decimal
    }

    // validate the number using the various other styles
}

Convert.ToByte() could probably look like this:

static byte ToByte(string input, int fromBase)
{
    // some algorithm that parses a number with an arbitrary base
}

I could use Reflector and peek at them, but I’m only using these as examples of the implementation styles I’m interested in (and yes, I know ToByte() only accepts bases 2, 8, 10, and 16.)  I feel like ToByte() here is the lesser of two evils because of the internals.  Parse() has rigid rules that require you to walk through each possible path of the code if you want 100% coverage of the logic.  ToByte() implements an algorithm that should work for many values of fromBase, and thus you can write a smaller number of tests to get full logic coverage.

So I feel like depending on how you implement the parameterized logic, you might reduce the testing and documentation burden.  The Parse() implementation requires a suite of tests for every base and many combinations of the NumberStyles parameter.  The ToByte() implementation requires a suite of tests against an algorithm that takes a parameter.  It feels like that ought to reduce the number of tests; I can imagine using xUnit.net’s theory tests to dramatically reduce the testing effort.
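
Here’s a rough sketch of the kind of theory test I have in mind, using xUnit.net’s [Theory]/[InlineData] (in older versions of the framework these attributes live in the xunit.extensions package):

using System;
using Xunit;

public class ToByteTheories
{
    [Theory]
    [InlineData("10", 10, 10)]
    [InlineData("10", 16, 16)]
    [InlineData("FF", 16, 255)]
    [InlineData("11", 2, 3)]
    public void ToByte_ParsesInputInTheGivenBase(string input, int fromBase, int expected)
    {
        Assert.Equal(expected, Convert.ToByte(input, fromBase));
    }
}

Each [InlineData] row exercises the same algorithm with a different base, so adding coverage is one line per case instead of a whole new test method.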

But I don’t think it’s so straightforward as, “Don’t use parameterized logic unless the parameter is an input to an algorithm.”  Consider what the interface for parsing numeric strings would look like if parameterized logic weren’t available:

struct Int32
{
    // ...

    static Int32 ParseHexNumber(...);
    static Int32 ParseHexNumberAllowWhitespace(...);
    static Int32 ParseHexNumberAllowLeadingWhitespace(...);
    static Int32 ParseHexNumberAllowTrailingWhitespace(...);
    ...
}

What a mess! I would not call this an improvement, even if it made the testing and documentation more clear.  So it looks like parameterized logic is yet another place where the developer must balance ease of use against maintainability.  While the pattern used in my Parse() implementation above is definitely a testing and maintenance nightmare, forcing that pain onto the user isn’t acceptable either.

How do you solve multiple asserts?

I’ve been studying unit testing for a few months, and trying to practice a more TDD approach to writing code.  The most influential book that inspired this is Roy Osherove’s The Art of Unit Testing.  It’s a great introduction to the topic of unit testing and serves as a fantastic stepping stone to more advanced discussions on the topic.

One of the things that Osherove warns against is multiple asserts in unit tests.  That is, one should generally avoid writing tests that can fail for more than one reason:

[Fact()]
public void Test_Some_Condition()
{
    Foo sut = new Foo();
    bool[] expected = { true, false };

    bool[] actual = sut.GetSomeResult(SomeInput());

    Assert.Equal(2, actual.Length);
    Assert.Equal(expected[0], actual[0]);
    Assert.Equal(expected[1], actual[1]);
}

I agree with this sentiment.  Sticking to one assert per test tends to make it easier to figure out what is wrong when a test fails.  If you have multiple asserts, the first one to fail tends to end the test; perhaps 2 or more assertions would have failed, but you only get information about one.  (My example is poor for this: if Length is incorrect then clearly the rest of the assertions will fail.)  I’m not really sure I like any of the solutions to this problem.  I’ll look at a few of them and explain why.

Messages with the asserts

This solution involves attaching string messages to the assertions to make it clear which assertion failed and for what reason.  This can help if you have several of the same kind of assertion and the values don’t make it clear which one failed.  I don’t like this technique for a personal reason: I use xUnit.net, and it only allows string messages on its true/false assertions:

[Fact()]
public void Test_Some_Condition()
{
    // ...

    Assert.True(
        2 == actual.Length, 
        string.Format("The length should have been {0}, but was {1} instead.",
            2,
            actual.Length
            )
        );
    // ...
}

This looks atrocious and is difficult to write.  It’s easier in test frameworks like NUnit that accept the string message as part of the assertion.  It’s also tedious to write multiple failure messages; I’d rather write “Arrays were not equal” than multiple individual strings, particularly when dealing with larger arrays that need string formatting to give me expected/actual values.
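
For comparison, here’s roughly how that check reads with NUnit’s classic assertions (Foo and SomeInput() are the made-up pieces from the earlier example):

using NUnit.Framework;

[TestFixture]
public class FooTests
{
    [Test]
    public void Test_Some_Condition()
    {
        bool[] actual = new Foo().GetSomeResult(SomeInput());

        // The failure message is just another argument; no Assert.True plus
        // string.Format gymnastics required.
        Assert.AreEqual(2, actual.Length, "The result should have exactly two elements.");
    }
}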

Custom Assert Methods

This is an idea I picked up from xUnit Test Patterns.  Generally, these groups of multiple assertions represent one logical assertion, so you wrap them in a single custom assertion method:

[Fact()]
public void Test_Some_Condition()
{
    // ...

    AssertArraysAreEqual(expected, actual);
}

private void AssertArraysAreEqual(bool[] expected, bool[] actual)
{
    Assert.Equal(expected.Length, actual.Length);
    Assert.Equal(expected[0], actual[0]);
    Assert.Equal(expected[1], actual[1]);
}

This doesn’t address the problem that the order of the assertions can hide the fact that multiple failures are happening.  It also doesn’t shed light on the fact that the custom assertion exists; “Assert.Equal failed, Expected: 0 Actual: 1” is not as helpful here as “Arrays were not equal.”  Adding strings to the messages can help a little bit, but I’ve already noted why this isn’t a universally convenient solution.

More Verbose Custom Assert Methods

This is something I tried recently, and it burns like fire.  Instead of making multiple asserts inside the custom assert, I perform the checks myself and fail with a message describing which check went wrong:

private void AssertArraysAreEqual(bool[] expected, bool[] actual)
{
    bool lengthsAreSame = (expected.Length == actual.Length);
    bool elementsAreSame = true;
    if (lengthsAreSame)
    {
        for (int i = 0; i < expected.Length; i++)
        {
            if (expected[i] != actual[i])
            {
                elementsAreSame = false;
                break;
            }
        }
    }

    if (!lengthsAreSame)
    {
        Assert.Fail("Array lengths are not equal.");
    }
    else if (!elementsAreSame)
    {
        Assert.Fail("Some elements held unexpected values.");
    }

    Assert.Pass();
}

If that doesn’t make your eyes hurt, you aren’t looking hard enough.  It’s big.  It has logic that should probably require its own tests to verify.  It was a pain in the butt to write.  (And what makes it worse is that the code I’m testing, which spawned this post, actually works with 2D rectangular and jagged arrays.)  Note that xUnit.net doesn’t have an Assert.Fail() equivalent, so I’d really end up using something nasty like “Assert.False(true, “…”)”.  I’m starting to question why I’m using it again.

Multiple Tests

On the “pain of implementation” meter this is pretty high.  To implement this solution, you create an individual test for each assertion:

[Fact()]
public void Foo_GetSomeResult_ValidInput_CorrectLength()
{
    // setup code

    Assert.Equal(2, actual.Length);
}

[Fact()]
public void Foo_GetSomeResult_ValidInput_CorrectElements()
{
    // setup code

    Assert.Equal(expected[0], actual[0]);
    Assert.Equal(expected[1], actual[1]);
}

That doesn’t even fully solve the problem!  Note that “CorrectElements” had to make multiple asserts, and without adding a custom string it’s hard to figure out which elements are wrong if that information is needed.  The setup logic is heavily duplicated; this tends to lead to the need for setup methods, which can cloud the scenario under test (particularly if you use your test framework’s automatic setup/teardown mechanism.)
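
For reference, the shared-setup shape I’m describing usually looks something like this in xUnit.net, which creates a fresh instance of the test class (and so runs its constructor) before each test.  Whether that clouds the scenario is exactly the tradeoff I’m grumbling about; Foo and SomeInput() are still the stand-ins from the earlier examples:

using Xunit;

public class Foo_GetSomeResult_ValidInput_Tests
{
    private readonly bool[] expected = { true, false };
    private readonly bool[] actual;

    public Foo_GetSomeResult_ValidInput_Tests()
    {
        // Shared setup: runs before every [Fact] in this class.
        actual = new Foo().GetSomeResult(SomeInput());
    }

    [Fact()]
    public void CorrectLength()
    {
        Assert.Equal(2, actual.Length);
    }

    [Fact()]
    public void CorrectElements()
    {
        Assert.Equal(expected[0], actual[0]);
        Assert.Equal(expected[1], actual[1]);
    }
}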

Am I interpreting a guideline as a rule?

I’m not sure I need to solve the problem every time I see it.  Since all of the solutions I know about have some tradeoffs I don’t want to deal with, perhaps sometimes it’s best to just live with multiple asserts?

I’m not happy with that answer.  I made this post because of some “do something fancy with multi-dimensional arrays” code I’m testing.  I decided to live with multiple asserts at first.  Then I had a couple of tests fail where the diagnosis would have been easier had I known that more than one assertion was failing.  I tried moving to the verbose custom assert technique outlined above, but the custom assert ended up as a 30+ line behemoth that feels wrong in many different ways.

Maybe I should just live with multiple asserts in this case, since the techniques to avoid them would cause more problems than they solve?

MSDN Redesign Number Ten Million

Microsoft has updated MSDN again for what feels like the third or fourth time in a couple of years.  Microsoft is really concerned with making sure it’s easy for new developers to pick up .NET, but I don’t think this redesign does it.

I think they missed the mark for true beginners.  I’ve never met anyone that started coding by saying, “I think I’d like to learn fundamentals of a programming language, then fundamentals of a GUI, then read a few dozen HOWTO articles that don’t have a lot to do with real applications.”  Instead, we start by saying something like, “I want to make my own Minesweeper” or “I want to make a Pac-Man clone”.  This redesign falls flat for this user.

The “desktop” page does a good job right up to step 4, the introduction videos.  “Build your first desktop RIA application with Silverlight?”  Why not just say “Build a smurfing smurf smurfer using SMURF?”  I think the video titles focus too much on what tools are used and not enough on getting the developer excited about what he’ll be doing.  Better titles would be “Build a desktop Twitter client with WPF” and “Build a browser or desktop Twitter client with Silverlight”.  When I was new, I’d have expected a “my first application” tutorial to be simple.  If I saw a 30-minute “build your first application with WPF” video, I’d assume that WPF made it really complicated to build simple things.  If instead I saw “Build a Twitter client with WPF”, I’d have understood that the video probably used some advanced features that needed explanation.  (On a side note: why on Earth is the thumbnail for the Silverlight video code that sets up a WNDCLASSEX?)

Step 5 falls flat by showing a disorganized bullet list of links that are randomly bolded.  Section headers would be nice, but it’d work if similar concepts were grouped together.  Why is the MS Office developer center there?  Why is there a link to the VB developer center but not the C# developer center?  I’m not really sure what Microsoft was going for with these links.

When I first started, I thought MSDN was the appropriate resource for learning to write Windows Forms applications from scratch.  This just isn’t the case.  MSDN is written in a technical tone, and many of the articles assume at least a moderate familiarity with .NET.  There are articles that help new developers, but they’re buried very deep in the Library hierarchy.  The “Getting Started with Windows Forms” page is exactly what I needed as a newbie who had to write a WinForms application in a month.  Getting there is simple:

  1. Go to msdn.microsoft.com
  2. Click “Library”
  3. Click “.NET Development”
  4. Click “.NET 4”
  5. Click “Windows Forms Portal”
  6. Click “Getting Started With Windows Forms” (It doesn’t show up in the sidebar like on all of the other pages!)

If I go through the chain that leads me through WindowsClient.net it’s just as bad.  I get 3 links to help with Windows Forms: blogs (which are 99% about WPF), a guided tour of WPF (WTF?), and a videos page.  The videos page leads with “How Do I: SqlAzureLOB Line of Business” (mandatory “ATM machine” or perhaps “WPF Windows Presentation Foundation GUI Graphical User Interface” joke).  Is that what Microsoft’s research has indicated helps new programmers understand how to write applications?

I understand that WinForms is supposedly on its way out.  I haven’t noticed a decrease in students using it on the forums I frequent.  I gave the WPF links a whirl, and they’re in better shape, but the video page is useless to a newbie.  The first video on the page is “Create WPF Master-Detail UI Using Data Sources Window Object DataSource”.  I’ve used WPF for nearly 2 years now and I don’t know what “Data Sources Window Object DataSource” is supposed to mean.  Why isn’t MS leading with “Creating a Standard WPF Application” and listing the videos in increasing difficulty order?  Why isn’t there a “difficulty” or “intended audience” field that can sort the videos?

What I would change

The MSDN Library is most useful for a professional developer, but I feel like newbies should be made aware of the documentation.  The “How Do I” videos look nice, but they run the gamut from beginner to advanced with no indication as to their level.  They should have a level and they should be sortable by this level.  Users should be allowed to rate them and comment on them as well so newbies can be warned when someone labels “Write a SharePoint application using AzureCloudSqlSharePointLOBPOSWCF and ham" as a beginner video.

MSDN should really promote their forums as well.  I learned at least 80% of my WinForms knowledge from forums.  As part of promoting the forums, MSDN should also clean up the forums; my suggestions for that could easily fill another post.  The summary: you don’t need to dazzle newbies with over 100 different forums.  Make a “Basic .NET development” page with five forums: WinForms, WPF, Silverlight, Native, and ASP .NET.  Don’t make newbies have to decide if their question should go in “C# general”, “VS 2010 Development”, “WPF Development” or “.NET 4 WPF Beta Discussion”!

I think Microsoft should find a way to promote their blogs better as well.  It’d be nice if they kept an archive of the good blog articles rather than using an RSS feed of the most recent ones.  Pete Brown’s blog is chock full of great Silverlight tips ranging from beginner to advanced, but you wouldn’t know this from browsing MSDN.

A redesign won’t solve these problems.  Video in and of itself is not a solution to the problems that need to be addressed.  If Microsoft wants to help newbies, they need to buckle down and put some effort into designing an intuitive and accessible experience.  Right now the only reason I can find anything at all is my 8 years of experience with wrestling MSDN.

Start gVim with Tab Pages or Windows

I’ve recently started using gVim more when I work with text files, and it makes me sad that I ever quit using it.  I’ll write more about why I think Vim is awesome in another post; this is another “It took me a few minutes to find this so I hope I help someone else” post.

If you want to start gVim with tabs, the command-line switch is -p[n]. If you don’t specify n, the default is one tab per file you specify. If you specify an n smaller than the number of files, the files that don’t get tabs will be opened as buffers (see :help buffers for that; it’s a good topic for another post.) I’m not sure what happens when n is greater than the number of files; I assume it just opens n tabs.

If you want to start gVim with windows, the command-line switch is -o[n]. It behaves like the tabs command-line switch.

For those that prefer examples (I know I do!):

gvim foo.txt bar.txt baz.txt
    gVim opens with foo.txt in the primary buffer and the other files in other buffers.
gvim -p foo.txt bar.txt baz.txt
    gVim opens with 3 tabs, one tab for each file.
gvim -p2 foo.txt bar.txt baz.txt
    gVim opens with 2 tabs. foo.txt and bar.txt are in tabs, baz.txt is in a buffer.
gvim -o foo.txt bar.txt baz.txt
    gVim opens with 3 windows, split who knows how.

The rest of the cases seem trivial to understand with these explanations.

Is it right to drill so close inland?

Yesterday I asked this question and left it unanswered because I didn’t want to rush.  I’ve spent a little time thinking about it and I know what I want to say.

First, a disclosure.  The oil industry has been very good to my family for generations.  I don’t know exactly what my great-grandfather did for his company but I know he was high in the org chart and supervised some things internationally.  My father has worked for the same company for all of my 26 years, possibly longer.  It’s not one of the big oil companies, but it’s still a large one that works internationally.  So it’s possible that I have a slightly biased view of the oil industry; it fed me for decades.

One can definitely argue that it is ethically irresponsible to put populated areas at risk, but that assumes a blowout with irreversible failure of a rig is a frequent event.  When’s the last time this happened?  I found an oil rig disaster site that seems to do a good job of sorting it out.  If you choose “blowouts”, we’ll be doing an apples-to-apples comparison.  The worst blowout (not counting the current one) leaked 3.5 million barrels of oil in 1979; it took 9 months to cap.  It seems this is the most comparable to the current situation; the rest leaked less than 100,000 barrels and were capped within 2 weeks.  There are other notable blowouts on the page, but no figures for the leaks are given; it seems that most of these killed people but were capped quickly enough to make the leak negligible.  Based on these figures, we can see that there hasn’t been a blowout of this magnitude for 30 years.  I won’t try to derive the odds of one of these blowouts happening, but they seem minuscule, and conversations with my father support this.

One may argue that any risk of such a disaster is enough to avoid drilling close to shore; it’s a fair argument.  I’m not sure how economically feasible this is.  Part of why we aggressively drill is because the country consumes oil at an alarming rate.  I found a chart showing average commutes per state from a 2007 US Department of Transportation survey.  The mean travel time to work in the US is 25.3 minutes.  76.1% of Americans drive alone to work.  Here’s a fuel economy table.  The average fuel efficiency has been consistently near 22 MPG for 20 years, while maximum fuel economy has been slowly increasing beyond 30 MPG.   That’s an awful lot of waste, and there’s a very long lag between the current model year’s average fuel efficiency and the nation’s average.  And that’s just cars.  How many of us run air conditioning too much?  How many of us don’t have solar panels on our roof?  How many of us run 2 or more TVs in the house on the same channel?  I’ve got 12 entertainment-related devices plugged into the wall at all times; how many does the average American have?  How many of us run the heater too much in the winter?  We’re developing electric cars, but the electricity can come from oil-fueled power plants; we just shift how the oil is used in this case.

What I’m getting at is we can’t just shut off offshore drilling and adjust to more expensive oil.  Entire suburbs cannot relocate closer to cities without decades of work towards providing affordable housing within the cities.  Public transportation has enough trouble making money without having to serve everyone in a 30 mile radius.  Will America cooperate with cutting their electricity usage by a drastic percentage like 25%?  Will homeowners pay upwards of $10,000 per home to install solar panels en masse?  Will we spend billions and wait decades for nuclear to trade risk of oil disaster for risk of radioactive disaster*?

If we decide that offshore drilling near land is unethical, America will have to dramatically alter a lifestyle that has been lived since the 50s.  It’s not something we can do quickly, either.  Most of the changes we’d need to make require at least a decade of work and funding we’re currently spending to set buildings and people on fire in other countries.  I see a growing sentiment for the nation to become “green”, but as is typical most people only want to be as green as they can get without giving up any luxuries.  We’re going to have to give up a lot of them to end offshore drilling.  Until then, we’ll continue to look the other way and hope rigs don’t fail.  We do this for a lot of issues, and it’s a shame.

* Radioactive disaster is probably at most as likely as a spill of this magnitude.  Wikipedia’s list of nuclear disasters shows that Three Mile Island in 1979 was the last failure that resulted in anything more dangerous than “required maintenance to repair damage”**.  By that measure, I suppose you could call nuclear safer.  Anything I say to disagree felt like I was being biased, so I won’t comment further.

** For this post I’m not going to consider international nuclear disasters, since a disaster in another country isn’t likely to have as large an impact on America. Chernobyl was in 1986; there was so much human error and borderline incompetence related to this disaster I’m not sure it counts.  I’ve heard that the reactor design was inherently unsafe as well, and a modern reactor wouldn’t be at risk for the same disaster.  I’m not a nuclear engineer, so I cannot verify.

The Gulf Oil Spill

What a fine mess.  I’m mostly uninformed about the spill because the only source of news I really follow is “people talking on Twitter”.  I got a little angry about what some people were saying the other day and @hotgazpacho called me out for being wrong.  So I decided to stop being lazy and read a bit about it.  It still didn’t answer most of my questions.

I decided to call my dad and see what he had to say about it.  He worked on an oil rig 20 years ago, and is an environmental/safety engineer for a refinery (not BP.)  He’s the person responsible for supervising cleanup when one of their pipelines leaks or a truck spills.  I figured between his roughnecking experience and training in cleaning up oil spills he could give me a better idea of what happened.  I didn’t really ask him if I could post this, but there’s nothing particularly dangerous.  Nothing he said is meant to represent his company or anyone else in the oil industry.

He’s better at being a scientist than I am: many of his answers were, “I don’t know.”  That’s fair enough; it’s a different company with different policies and he hasn’t been on a rig for a long time so there’s plenty of uncertainty.  It’s also hard to interpret everything because all we get is what the news reports: context can be lost and words can be misquoted.  But there were a few interesting points he made.

He said what he has read about the events leading up to the explosion doesn’t make sense.  A blowout (oil/gas under pressure pushes the drilling rig back out) is the most likely cause.  He also said that even 20 years ago it wasn’t difficult to see a blowout coming.  He mentioned that there were plenty of pressure gauges to watch, and if you hit a gas pocket and knew something bad was coming you could start the failsafes (he used some jargon for these failsafes but I don’t remember it exactly; I’m using the layman’s terms he explained.)  The first failsafe clamps the rig shut to try to prevent the pressure bubble from reaching the surface.  If that doesn’t work, the next failsafe cuts the drilling bit and lets it fall (I’m not sure how that helps but didn’t ask.)  The last resort is to cap the rig; this renders it unusable but is preferable to a blowout.  He seemed certain that if these measures were taken, the explosion could not have happened.  Applying Occam’s razor, we agreed that either someone didn’t notice the pressure at all, they didn’t notice it soon enough, or an equipment failure rendered the failsafe measures inoperable.  He felt that since at least 11 men were near the rig when it exploded, they must not have known it was happening.  He also mentioned that some of what BP has said doesn’t make sense; in particular, they’ve apparently claimed that there were “troubles” applying one of the failsafes in the past, but he couldn’t interpret what that meant.  I got the feeling this meant that if they’d applied the failsafes in the past, the rig should have been deemed unsafe or the equipment replaced.  This sounded like something that likely got mistranslated between BP and the press; the representative probably used jargon and the reporter may have misunderstood.

I asked him why the cleanup seemed so ineffective.  The primary weapon used in an oil spill is a boom; this is like a tarp with a float at the top and weights at the bottom.  It forms a barrier at the surface of the water, and since oil floats, the boom can contain the oil.  Once the slick is contained, a variety of methods are used to get the oil off the surface of the water or break it down.  Unfortunately, booms are not effective on the open seas.  The typical boom was about 10 inches tall the last time I saw one; when the waves can be 3-8 feet, the seas will just throw the water over the boom.  He also pointed out that even if every US oil company combined their resources, there very likely wouldn’t be enough boom or ships to contain a slick this large.  He seemed to believe the coast guard should have burned the slick sooner.  He did point out that he’s more used to cleaning up spills on land or in creeks and rivers, and an ocean spill is a completely different beast.  He qualified most of his opinions with caution, pointing out that he doesn’t know everything since he’s not working for them and all he gets is what the news says.   However, he also feels like the size of the slick, combined with the fact that BP hasn’t been able to shut off the rig, indicates it may not have mattered.  BP is fiddling with something at the drill site beneath where the rig was erected to attempt to stop the flow of oil, but it isn’t working.   Dad worried this indicates the leak is beneath the drill site, and if that’s the case there’s not much that can be done.   That’s troubling.

The news was quick to cover the Morgan City rig tipping over and frame it as equal to the one that sank.  He said this is hyperbole; the Morgan City rig was being towed to a new location, so no oil was spilled.  It was carrying diesel fuel, but in the end it’s probably about as disastrous as someone sinking their yacht, particularly when you consider the magnitude of what’s happening nearby.

In the end, it sounds like it’s going to be another case study in engineering ethics books.  Dad feels like BP’s going to blame at least some of the dead for making a mistake that triggered the event, but it’s more likely it was one of those rare combinations of several failures that leads to a disaster.  It happens.  Even when there’s a one in ten million chance of failure, that one failure will eventually happen.  I’ve read articles that state it well: the news doesn’t report when a worker flips the wrong switch at the power plant and three failsafes were defective but the fourth prevented disaster.  It’s when that fourth failsafe happens to fail that we see the news.  Everything can fail.

Cleanup’s going to be bad.  The morbid jokes at the office are that if you enjoy Destin and the Gulf Coast, you’d better get down there before the slick gets too thick.  It’s true.  Cleanup takes a long time.

Is it wrong for us to drill so close inland?  Why wasn’t BP prepared for a spill of this magnitude?  Why didn’t we have a plan for a disaster of this scale?  These points are too complicated for me to rush through.  I’m going to think more about them and post about them separately.

Infinite Space

Infinite Space is an RPG for the Nintendo DS published by SEGA.  It was developed by Platinum Games, a studio with a stellar track record if you count their Clover days as well.  Here’s what you’d expect from the back of the box and preview material:  Infinite Space is an RPG set in space.  Over the course of the story, you interact with hundreds of characters on as many planets.  There are dozens of ships that can be customized with hundreds of modules; trick out your fleet to handle a myriad of situations.  Now I’ll analyze how Infinite Space lives up to these expectations.

Story

Infinite Space shines at storytelling.  The overall story isn’t going to win any prizes, but it’s presented through interactions with a very diverse cast of relatively believable characters.  Sure, they padded the roster with a few dozen one-dimensional characters, but there’s a handful of main characters that actually have a decent amount of character development.  The story is what kept me playing the game. 4/5 for story, since it adequately propped up what the rest of the game lacked.

Graphics/Sound

The audio/visual presentation is severely lacking.  Someone needs to tell the developer that the DS is just not a good platform for 3D games that want detailed textures.  The 2D story scenes are really well done; I think the game would have been visually stunning if they’d used sprites instead of 3D models for the ships.  The sound effects consist of beeps, boops, and pew-pews that would have been substandard during the 16-bit era.  Worse, some of them are much louder than others.  I think the music tracks were good, but the game was usually too busy overlaying them with radio static, beeps, and low-quality explosions for me to notice.  2/5 for graphics and sound.

Gameplay

(Gameplay is the most important factor, so I will spend the most time discussing it.)

Everything is a nested series of menus.  Rearranging ship modules is something you’ll want to do frequently; doing so is a 12-step process where 2 of the steps are “select a module” and “find somewhere to put it.”  The other 10 steps are generally “do something” followed by answering, “Are you sure you want to do something?”  There’s no need for this continuous confirmation.

Combat is a pillar of any RPG; you usually spend most of your time in combat.  Infinite Space’s combat is disappointing and shallow.  You might expect several different types of weapons and armor forming a complex matrix of strengths and weaknesses (think Pokemon.)  Instead, you have a choice between normal, barrage (3x normal), and dodge.  Dodging nullifies the damage from barrage and causes you to take more damage from normal.  Weapons and modules have a marginal impact on your damage output.  The most important factor in any combat is “have you dodged yet” or “has the enemy dodged?”  Later, you gain fighters and special combat abilities, but rather than adding depth these just crank down the difficulty by letting you sit out of range and whittle away at the enemy.  Yawn.

The game lacks a quest log or any other mechanism to track your current goals.  You’re screwed if you forget what planet you were asked to visit, particularly if you forget who told you to visit it.  Occasionally the action you need to take to continue the questline is obtuse.  In some cases, crew members provide hints after several failures, but this is a rarity.  There’s no excuse for a lack of a quest log in a modern RPG, and hint systems can be highly rewarding.

Playing Infinite Space was a very frustrating ordeal, mostly due to game mechanics that make combat vary from “coma-inducing” to “requires you to solve a puzzle”.  1/5 for gameplay.

Challenge

The opening chapters of the game ramp the difficulty quickly and ultimately punish you if you try to grind your way to success.  If you buy better ships, you’ll be disappointed when the next boss fight forces you to melee.  One boss fight required me to enter 2 battles without firing weapons, *then* survive the boss battle with no way to heal between any of the battles.  It seems the developers realized they didn’t have a good way to increase the difficulty, so they resorted to crippling your fleet before most boss battles.

At the halfway point, this changes dramatically.  After a couple of particularly epic boss battles, the difficulty plunges into easy mode.  The endgame ships are at least twice as powerful as necessary to sleep through the boss fight.

In the end, the most challenging aspect of the game was deciphering how the mechanics work.  The manual is so sparse that SEGA provided an auxiliary manual for download.  It still leaves many gaping holes.  Some of the more important ship statistics are invisible.  Sometimes the description of a skill or statistic is locked until several hours into the game.

2/5 stars for providing a lopsided challenge and making the UI a big component of it.

Conclusion

Infinite Space has the makings of a fantastic game wrapped in the trappings of a game that was rushed out the door.  The game lures you in with promises of a deeply customizable combat system, then delivers rock-paper-scissors where you can change the color of your fist.  The detailed character art and likable story are enough to motivate a veteran of console RPGs to finish.  If half the effort spent on these aspects had been spent on combat, I’d wholeheartedly recommend this game.  Instead, I say it’s 2/5; a poor game propped up by a decent storyline.   If you aren’t a JRPG fan, stay away.

The Flaming Lips Part 2: The Prequel

Here’s part 2 of my The Flaming Lips concert post, in which I discuss the pre-show stuff.  I’ll talk about Austin Music Hall in general some other time; Yelp’s reviews are pretty accurate for now.

Pre-Show

This got too big and I just deleted two paragraphs.  The pre-show experience was heightened when AMH let people over 21 in about an hour before the posted door opening time.  This moved me from about 100th in line to more like 30th in line.  Hooray for age perks!  They had a lobby with a bar that they let us mill about in; this is most likely to combat the bars across the street who had competitive pricing on beers.  Can’t let your concert customers pay someone else for beer!  I got a water because I’m too lame for beer.  After they let us in (only about 10 minutes late; not bad) I made a beeline for where I thought would be the best position.  Then we watched the tech crews get stuff ready.

The crowd was pretty nuts.  I saw Santa Claus, people in bunny suits (the white fuzzy kind, not the sexy-lady-in-fishnets kind), and one guy that I thought was a bunny was actually a unicorn.  That guy was front and center in the mosh pit.  Rock on, Mr. Unicorn.  There were lots of silly hats, and I was disappointed that I hadn’t known this was going to be a common thing.  I have a DJ Lance hat that I would have worn, and it would have turned out to be extraordinarily appropriate.  Oh well.

Warmup Band

The warmup band was Stardeath and White Dwarfs.  The art on their site reminds me of TFL, they’ve collaborated with TFL, and the music was complementary to TFL’s music.  The stage was crazy; lots of strobes, lots of spotlights, lots of fog, and this weird sparkly LED thing was set up behind the drummer.  I’m not sure if this is something they usually have or if it’s one of the benefits of opening for TFL, but it definitely made the show more enjoyable.  I liked them, and I think I wouldn’t mind seeing them as the headliner one day.  I’m buying their album soon and I’ll see if that opinion changes.

I feel sorry for them because the crowd wasn’t really enthusiastic.  Part of it was their fault; they didn’t really say much to the crowd.  The only thing I remember is “One more song and then TFL will come out!”  I think if they’d tried a little harder to get the crowd into their show they could have gotten a better response.  On the other hand, everyone was there to see TFL, so getting the crowd interested was definitely a tough job.  They weren’t booed, but they only got scattered claps and cheers after each song.

The Transition

After S&WD left, I was prepared to spend the next 30 minutes or so watching groupies and techs set up the stage for TFL.  That’s how it’s worked at all the other concerts I’ve been to so far.  Those bands are losers. Within minutes, Wayne Coyne himself was on the stage fiddling with the mic and guitars.   Techs were still doing things (one of them fell off the stage at one point, poor guy!) After a while, the rest of the band was on stage, and we still had 20 minutes or so before the show.

Soon, I saw the red dot of a laser pointer and started getting agitated by it.  Then I turned around and saw a DJ Lance handing out huge handfuls of them.  I went to investigate and she claimed it was an interactive part of the show and asked if I’d hand some out for her.  So I scooped up a huge handful and did.  The next 10 minutes or so were spent shining lasers at random things.  Then the show started.  But I’ve already talked about that!

Next time: I think I’m going to review Infinite Space, the DS game I’m playing.

Miniblog

I started a Tumblr blog.  I’ve had an account on Tumblr for a few weeks now, and mostly I’ve used it to figure out what purpose it might serve for me.  After fooling around with it and getting comments installed, I decided I’d found a niche.

My Tumblr blog is for the little things I find or think of that are too small for blog posts but require more discussion than a tweet.  I’ve got a huge Instapaper backlog of sites that I’m supposed to read later.  My del.icio.us list is tainted because coworkers follow it, so it’s more business than personal.  I needed something I could use to find the little Youtube videos, quotes, and sites that I want to look back on every now and then, and Tumblr fills this need.

So the hierarchy of information about me is as follows:

  • My Twitter account is for brief thoughts, discussions with other people on Twitter, and random retweets of links that look interesting.  Twitter doesn’t archive your stuff forever, so it’s all throwaway.
  • Links, quotes, or thoughts that I want to stick around forever go to Tumblr.  In particular, if I have a somewhat long comment or want to call out a section of a page I’ll use Tumblr.
  • This blog is where longer, more organized thoughts go.

I’m happy with this; it’s cleared up some of the problems I had with using Instapaper and del.icio.us.

The next long post will be about the band that played before The Flaming Lips, followed by comments on the venue.  Both will be much shorter than the Flaming Lips blog post.  I’m going to wait a few days because I want this to have some visibility.