Friday, May 14, 2010

I can't stop watching this

Ever since Kevin shared this with me, I've been watching it constantly.





I'll have an end-of-the-semester post some time soon, I promise :-) Followed by a new tech post.

For those who are wondering: yes, the paper referenced in the last post got accepted :-) Now waiting to hear back on another one (sigh).


-SM

Saturday, February 20, 2010

Rage.



I has it.

Waiting for paper notifications is always a pain. I do my best not to think about it and focus my energy instead on the algorithm I'm currently working on. This semester is a special case, since I should hear back about two papers in the next two weeks. Both are solid pieces of research, and I'm incredibly proud of having been a part of them. At this point, though, I am all too familiar with the paper reviewing process and the endless resubmissions. Half the time, it's not clear to me that the reviewers are actually reading the damn things... Each reviewer has so many papers to look at, and seemingly far too little time to read them all. Sometimes I want to scream out the same things ResearchCat is spouting above.


The tentative caption for the above picture is currently "ResearchCat on EasyChair".

And yes, I may have more ResearchCat pictures in the future... you didn't think LOLQualCats was the end, did you?


-SM

Wednesday, August 26, 2009

An update of sorts

What can you say about an eighteen-month-old laptop that died? My attachment to my IdeaPad could be crafted into a Love Story in itself, prematurely cut short when the laptop died earlier this week. I had a Lenovo Y410, bought when the IdeaPads first came out. I paid close to $900 for it, which got me a 250GB SATA hard drive, 2 GB of RAM, and an Intel Core 2 Duo processor. It also had a 14.1" glossy widescreen display, facial recognition software, a Dolby sound system, and an integrated webcam. It was a good machine for the price. And the hard drive came already partitioned, allowing me to install Ubuntu with ease.

And now it was gone.


I had done several things that, looking back, were very foolish. As a graduate student about to finish her M.S. and potentially move into HPC, I wanted a laptop with two cores so I could run MPI on it. I was also hurting for money, so I did not get the extended warranty. The machine came with a one-year warranty, and I naively thought that a.) if anything was defective, it would break within the first year, and b.) I would remember to buy an extended warranty anyway before the warranty period was up.

Now, with the machine four months out of warranty, I bitterly look back and realize how wrong I was. The motherboard was shot: when I plugged in the machine, the battery would no longer charge (and yes, I did confirm it had nothing to do with the AC adapter). Thinking back, I always had an issue with the way the charger plugged into this laptop. It was a tight fit; whenever I unplugged the damn thing, the sensation was more akin to dislocating a shoulder than removing a plug from a machine. That sensation of the charger being ripped out of my laptop's socket should have tipped me off that something could be seriously wrong. Alas, I ignored it.

The motherboard would cost almost $200 to replace, plus Lenovo's $50 out-of-warranty penalty. That did not include the cost of installation and taxes, which would add considerably to the total. So I had a choice: fork over $250+ or buy a new machine. But could I afford a new machine? And if I fixed the current one and it broke again, I'd be paying for parts plus the $50.00 out-of-warranty charge all over again. With technology progressing the way it is, is it really worth paying to fix this machine?

Lucky for me, I happened to be in NYC on vacation when this occurred. Since classes start next week, I knew that if I needed to buy a new machine, there would never be a more perfect opportunity. Even luckier, I was reminded of a wonderful deal at J&R: a refurbished T42 laptop for a measly three hundred dollars. For an additional $90.00, I got a 2-year warranty. Well, why not? While some may sneer that this is a serious downgrade, it may not be for me. Let's put the two machines side by side:





Lenovo Y410


  • Intel Core 2 Duo, 1.66Ghz 2MB L2
  • 2 GB DDR2 RAM
  • 250 GB SATA HD
  • Intel X3100 Integrated graphics card
  • 14.1" Widescreen VibrantView screen, glossy
  • Full complement of ports
  • Sound system(?)
  • Integrated Webcam
  • 5.6 lbs
  • Windows Vista Home Edition

ThinkPad T42


  • Intel Pentium M 1.7Ghz 2MB L2
  • 1.5GB DDR RAM
  • 160 GB IDE HD (upgraded from the original 40 GB -- thanks Kevin!)
  • ATI Radeon 7600 32MB graphics card
  • 14.1" TFT Active matrix display (1024x768)
  • Full complement of ports
  • Trackpoint button
  • Better keyboard (and webcam -- thanks Ethan!)
  • 4.9 lbs
  • Windows XP Pro



So specs are specs, but specs outside the context of usage mean nothing. If I were a super-gamer, a laptop power-user, or even if I expected to do the majority of my development on this machine, this would, in truth, be a downgrade. But my computational needs 1.5 years ago were radically different from my current needs.

My primary machine is a quad-core sitting in my office. All of my development and basic testing happens there; actual benchmarks run on a remote cluster. In other words, anything requiring HPC performance is already taken care of by my desktop or the cluster. So power is moot.

So what do I use my laptop for? These days I'm either using it to work remotely or messing around on the internet. Thus, I require only the basics for development. Will this machine suffice?

Let's run down the specs, starting with the processor. The Pentium M was designed for mobile computing efficiency. Reasonably fast and power-efficient, it's probably the most I could ask for from a single-core option. Clearly, a dual-core solution is more powerful, but do I need that power? The laptop comes with Windows XP, not Vista, so it should be fine if I ever decide to use Windows. And since I do most of my work in Ubuntu Linux anyway, performance is not really an issue.

Next, memory and hard disk. What really is the practical difference between 1.5 and 2 GB? How much usability does that extra 512MB really give you, especially in Linux? I'm sure with gaming every extra megabyte counts, but for what I'm using this machine for, it probably doesn't matter. While the 40GB hard drive is woefully small, I could subsist on it. The 250 GB SATA hard drive from my old laptop is now sitting comfortably in its new external enclosure, so if space really becomes an issue, I can rely on that. Lastly, as an early birthday gift, my wonderful boyfriend gave me a new 160GB IDE hard drive for this laptop. Freakin' yes.

There are a couple of things I wasn't too thrilled about. For one, the maximum resolution on this machine is the same as on my old T30, and the video card is not that great. There was also the issue of the missing webcam, but my good friend Ethan was kind enough to give me his (you're amazing, Ethan!). Still, for under $500, I got a solid machine with a 2-year warranty that works great with Ubuntu.

Ask me later about my experience at J&R. It took a few iterations to get the machine I actually wanted, and the Windows install they gave me was defective at best. Thankfully, as an ACM student member I'm entitled to free software, including Windows XP Pro. It's so good to be a student sometimes! Now I have a Windows disc I can use if necessary. What more can I ask for?

Wednesday, September 10, 2008

Running Hadoop on OS X 10.5 (64-bit) single node cluster

These instructions are for installing and running Hadoop as a single-node cluster. This tutorial follows the same format, and largely the same steps, as the incredibly thorough and well-written tutorial by Michael Noll on setting up a Hadoop cluster under Ubuntu. It is essentially his procedure with changes made for OS X users, plus other things I was able to piece together from the Hadoop Quickstart and the forums/archives.

Step 1: Creating a designated hadoop user on your system


This isn't -entirely- necessary, but it's a good idea for security reasons.
To add a user, go to:
System Preferences > Accounts
Click the "+" button near the bottom of the account list. You may need to unlock this ability by hitting the lock icon at the bottom corner and entering the admin username and password.
When the New Account window comes up, enter a name, a short name, and a password. I entered the following:
Name: hadoop
Short name: hadoop
Password: MyPassword (well, you get the idea)

Once you are done, hit "create account".
Now, log in as the hadoop user. You are ready to set up everything!

Step 2: Install/Configure Preliminary Software



Before installing Hadoop, there are several things you need to make sure you have on your system:

1. Java, and the latest version of the JDK
2. SSH

Because OS X is awesome, you don't actually have to install either of these. However, you will have to enable and update what you have. Let's start with Java:

Updating Java
Open up the Terminal application. If it's not already on your dock, you can access it through
Applications > Utilities > Terminal
Next check to see the version of Java that's currently available on the system:
$:~ java -version
java version "1.5.0_13"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_13-b05-237)
Java HotSpot(TM) Client VM (build 1.5.0_13-119, mixed mode, sharing)

You may want to update this to Sun's Java SE 6, which is available as an update for OS X 10.5 (Update 1). Note that it's currently only available for 64-bit machines. You can download it here.

After you download and install the update, you are going to need to configure Java on your system so the default points to this new update.
Go to Applications > Utilities > Java > Java Preferences
Under "Java Version" hit the radio button next to "Java SE 6"
Down by "Java Application Runtime Settings" change the order so Java SE 6 (64 bit) is first, followed by Java SE 5 (64 bit) and so on.
Hit "Save" and close this window.

Now, when you go to the terminal, and type in "java -version" you should get the following:
$:~ java -version
java version "1.6.0_05"
Java(TM) SE Runtime Environment (build 1.6.0_05-b13-120)
Java HotSpot(TM) 64-Bit Server VM (build 1.6.0_05-b13-52, mixed mode)

and for "javac -version":
$:~ javac -version
javac 1.6.0_05


Onto ssh!

SSH: Setting up Remote Desktop and enabling self-login
SSH also comes pre-installed on your Mac. However, you need to enable login access to your own machine (so Hadoop doesn't ask you for a password at inconvenient times).
To do this, go to System Preferences > Sharing (under Internet & Network)
Under the list of services, check "Remote Login". For extra security, you can select the radio button for "Only these Users" and choose hadoop.

Now, we're going to configure things so we can log into localhost without being asked for a password. Type the following into the terminal:

$:~ ssh-keygen -t rsa -P ""
$:~ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

Now try:
$:~ ssh localhost

You should be able to log in without a problem.
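If ssh still prompts for a password, the usual culprit is permissions: sshd silently ignores your key files when they are group- or world-writable. The whole setup above can be sketched as one idempotent snippet (a sketch using the standard OpenSSH default paths; adjust if yours differ):

```shell
# Passwordless self-login setup -- generates a key only if one doesn't
# already exist, authorizes it, and tightens permissions (sshd refuses
# to honor overly permissive key files).
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -P "" -f "$HOME/.ssh/id_rsa"
cat "$HOME/.ssh/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/authorized_keys"
```

Re-running the snippet is safe: the key is only generated once, and the chmods just re-assert the same permissions.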

You are now ready to install Hadoop. Let's go to step 3!

Step 3: Downloading and Installing Hadoop



So this actually involves several smaller steps:

1. Downloading and Unpacking Hadoop
2. Configuring Hadoop
3. Formatting and Testing Hadoop

After we finish these, you should be ready to go! So let's get started:

Downloading and Unpacking Hadoop
  • Download Hadoop. Make sure you download the latest version (as of this blog post, 0.17.2 and 0.18.0 are the latest). We will refer to our Hadoop version generically as hadoop-* in this tutorial.

  • Unpack the hadoop-*.tar.gz in the directory of your choice. I placed mine in /Users/hadoop. You may also want to set ownership permissions for the directory:

    $:~ tar -xzvf hadoop-*.tar.gz
    $:~ chown -R hadoop hadoop-*

    Configuring Hadoop

    There are two files that we want to modify when we configure Hadoop. The first is conf/hadoop-env.sh . Open this in nano or your favorite text editor and do the following:

    - uncomment the export JAVA_HOME line and set it to /Library/Java/Home

    - uncomment the export HADOOP_HEAPSIZE line and keep it at 2000

    You may want to change other settings as well, but I chose to leave the rest of hadoop-env.sh the same. Here is an idea of what part of mine looks like:

    # Set Hadoop-specific environment variables here.

    # The only required environment variable is JAVA_HOME. All others are
    # optional. When running a distributed configuration it is best to
    # set JAVA_HOME in this file, so that it is correctly defined on
    # remote nodes.

    # The java implementation to use. Required.
    export JAVA_HOME=/Library/Java/Home

    # Extra Java CLASSPATH elements. Optional.
    # export HADOOP_CLASSPATH=

    # The maximum amount of heap to use, in MB. Default is 1000.
    export HADOOP_HEAPSIZE=2000


    The next file we need to set up is hadoop-site.xml. The most important parts here are hadoop.tmp.dir (which should be set to a directory of your choice) and the mapred.tasktracker.tasks.maximum property, which you will need to add to the file; it sets the maximum number of tasks that a task tracker will run simultaneously. You should also set dfs.replication to 1, since this is a single-node cluster.

    Below is a sample hadoop-site.xml file:

    ---------------
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>

    <property>
    <name>hadoop.tmp.dir</name>
    <value>/Users/hadoop/hadoop-0.17.2.1/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
    </property>

    <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
    <description>The name of the default file system. A URI whose
    scheme and authority determine the FileSystem implementation. The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class. The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
    </property>

    <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
    <description>The host and port that the MapReduce job tracker runs
    at. If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
    </property>

    <property>
    <name>mapred.tasktracker.tasks.maximum</name>
    <value>8</value>
    <description>The maximum number of tasks that will be run simultaneously by
    a task tracker.
    </description>
    </property>

    <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
    </description>
    </property>

    </configuration>
    -----------

    Now to our last step!

    Formatting and Testing Hadoop

    Our last step involves formatting the namenode and testing our system.

    $:~ hadoop-*/bin/hadoop namenode -format

    This will give you output along the lines of

    $:~ hadoop-*/bin/hadoop namenode -format
    08/09/14 21:22:14 INFO dfs.NameNode: STARTUP_MSG:
    /***********************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = loteria/127.0.0.1
    STARTUP_MSG: args = [-format]
    ***********************************************************/
    08/09/14 21:22:14 INFO dfs.Storage: Storage directory [...] has been successfully formatted.
    08/09/14 21:22:14 INFO dfs.NameNode: SHUTDOWN_MSG:
    /***********************************************************
    SHUTDOWN_MSG: Shutting down NameNode at loteria/127.0.0.1
    ***********************************************************/


    Once this is done, we are ready to test our program.

    First, start up the Hadoop daemons. This will start a NameNode, DataNode, JobTracker, and TaskTracker on the machine.

    $:~ hadoop-*/bin/start-all.sh

    As input for our test, we are going to copy the conf folder up to our DFS.

    $:~ hadoop-*/bin/hadoop dfs -copyFromLocal hadoop-*/conf input

    You can check to see if this actually worked by doing an ls on the dfs as follows:
    $:~ hadoop-*/bin/hadoop dfs -ls
    Found 1 item
    /user/hadoop/input <dir> 2008-09-11 13:33 rwxr-xr-x hadoop supergroup

    Now, we need to compile the code. cd into the hadoop-*/ directory and do:
    $:~ ant examples

    This will compile the example programs found in hadoop-*/src/examples

    Now, we will run the example distributed grep program with the conf folder as input.

    $:~ hadoop-*/bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'

    If this works, you'll see something like this pop up on your screen:
    08/09/13 20:47:24 INFO mapred.FileInputFormat: Total input paths to process : 1
    08/09/13 20:47:24 INFO mapred.JobClient: Running job: job_200809111608_0033
    08/09/13 20:47:25 INFO mapred.JobClient: map 0% reduce 0%
    08/09/13 20:47:38 INFO mapred.JobClient: map 13% reduce 0%
    08/09/13 20:47:39 INFO mapred.JobClient: map 16% reduce 0%
    08/09/13 20:47:43 INFO mapred.JobClient: map 22% reduce 0%
    08/09/13 20:47:44 INFO mapred.JobClient: map 24% reduce 0%
    08/09/13 20:47:48 INFO mapred.JobClient: map 33% reduce 0%
    08/09/13 20:47:53 INFO mapred.JobClient: map 41% reduce 0%
    08/09/13 20:47:54 INFO mapred.JobClient: map 44% reduce 0%
    08/09/13 20:47:58 INFO mapred.JobClient: map 50% reduce 0%
    08/09/13 20:47:59 INFO mapred.JobClient: map 52% reduce 0%
    08/09/13 20:48:03 INFO mapred.JobClient: map 61% reduce 0%
    08/09/13 20:48:08 INFO mapred.JobClient: map 69% reduce 0%
    08/09/13 20:48:09 INFO mapred.JobClient: map 72% reduce 0%
    08/09/13 20:48:13 INFO mapred.JobClient: map 78% reduce 0%
    08/09/13 20:48:14 INFO mapred.JobClient: map 80% reduce 0%

    ... and so on
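As a quick aside on what that job is actually doing: it extracts every string matching the regular expression 'dfs[a-z.]+' from the input files and counts the matches. You can preview what the pattern picks up locally with plain grep (the sample file below is made up just for illustration):

```shell
# Preview what 'dfs[a-z.]+' matches, outside of Hadoop.
# sample.txt stands in for the conf files we copied into the DFS.
printf 'dfs.replication\nmapred.job.tracker\nset dfs.name.dir here\n' > sample.txt
grep -Eo 'dfs[a-z.]+' sample.txt
# prints:
# dfs.replication
# dfs.name.dir
```

The -o flag makes grep print only the matched text rather than the whole line, which is exactly what the Hadoop job emits as its map output keys.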

    The last step is to check if you have output!
    You can do this by running:

    $:~ hadoop-*/bin/hadoop dfs -ls output
    Found 2 items
    /user/hadoop/output/_logs <dir> 2008-09-13 19:21 rwxr-xr-x hadoop supergroup
    /user/hadoop/output/part-00000 <r 1> 2917 2008-09-13 20:10 rw-r--r-- hadoop supergroup

    The most important part is that the number next to the <r 1> should not be 0.

    To check the actual contents of the output do a

    $:~ hadoop-*/bin/hadoop dfs -cat output/*

    Alternatively, you can copy it to local disk and check/modify it:

    $:~ hadoop-*/bin/hadoop dfs -copyToLocal output myoutput
    $:~ cat myoutput/*
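With the output copied locally, a scriptable way to do the "is it non-zero?" check is test -s, which succeeds only when a file exists and is non-empty. A small sketch (the stand-in file is created here just so the snippet runs on its own; in practice you would point it at your real myoutput/part-00000):

```shell
# Stand-in for real job output, so this snippet is self-contained.
mkdir -p myoutput
printf 'dfs.replication\t1\n' > myoutput/part-00000

# -s: true iff the file exists and has size greater than zero.
if test -s myoutput/part-00000; then
    echo "output looks good"
else
    echo "empty output -- check the job logs"
fi
```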


    When you're done running jobs on the dfs, run the stop-all.sh command.

    $:~ hadoop-*/bin/stop-all.sh


    And that concludes our tutorial! Hope someone finds this helpful!
    Saturday, August 16, 2008

    Good times

    Kevin came to visit me in TX, and we've had a wonderful week together (well, aside from me being sick last Wednesday and Thursday, but anyway). There was much to celebrate: I have a new apartment, he just finished his M.S., and (as I found out just today) I got my first paper accepted to a conference! Hooray!

    Instead of going out, we decided to make dinner. What was on the menu? Brownies, Shake n' Bake chicken pieces, and a custom recipe for garlic mashed potatoes. Kevin and I both -love- garlic, so, for all you garlic enthusiasts out there, here is our recipe!

    -----------

    KEVIN'S & SUZANNE'S GARLIC LOVERS MASHED POTATOES

    4 large red potatoes
    1 can of garlic chicken broth
    3-4 cloves of garlic
    1 T butter
    1 - 2 T milk

    1. Cut potatoes into reasonably sized chunks and toss them into a medium saucepan.

    2. Pour in 1 can of garlic chicken broth. Add additional water until tops of potatoes are just covered with water. Mince 2 cloves of garlic with garlic press and add in.

    3. Bring the potatoes to a boil; when they boil, let simmer for an additional 6-8 minutes. The potatoes are done when you can stick a fork into the individual pieces.

    4. Drain away the majority of the broth, letting only a little bit remain. Mash with potato masher. Add milk, a little bit at a time, to get desired consistency. Mash with one tablespoon of butter.

    5. Cut up the last one to two cloves of garlic (depending on your taste!) into coarse slices, each about 2-3 mm thick. Throw into mashed potatoes and mash till properly assimilated. Salt and pepper to taste.



    Ok, back to the Olympics and perhaps some more Cowboy Bebop! Nikhil, I finally understand why you wanted a Corgi :-)

    EDIT
    --------

    Man, those potatoes are powerful. Other suggested titles:

    Kevin's and Suzanne's Hit-Me-in-the-Face-with-Garlic Mashed Potatoes
    Kevin's and Suzanne's Garlic-Explosion Mashed Potatoes
    Kevin's and Suzanne's Pistol-Whipped-with-Garlic Mashed Potatoes
    Kevin's and Suzanne's I-don't-mind-knocking-people-unconscious-for-a-week Mashed Potatoes
    Kevin's and Suzanne's Vampire-Slayer Mashed Potatoes
    Kevin's and Suzanne's Garlic Armageddon Mashed Potatoes
    Kevin's and Suzanne's Death-By-Garlic Mashed Potatoes

    More Mashed Potato title suggestions welcome

    Sunday, July 06, 2008

    40+ kids that can draw better than me

    Friend Nikhil linked me to Doodle 4 Google. I was absolutely stunned by the artistic ability of these kids. You can check out the 40 Regional Winners or the US Map of Finalists. I would love to hear people's thoughts :-)

    -Suzanne

    Friday, July 04, 2008

    Eliza

    "How my Program Passed the Turing Test" - Mark Humphyrs

    Read and enjoy.

    Somewhat drunk right now, but fairly happy and pleased.

    Hope everyone has a happy 4th of July.

    Wednesday, July 02, 2008

    Sick Day




    My stomach is acting up again... so I'm spending my spare time learning Python and gluing captions to cats.

    Summer's going slow but ok so far. Miss everyone at RPI...

    Tuesday, June 24, 2008

    Installing Sun's Java JRE and JDK on Fedora 5

    So for the last couple of days, I've been trying to get Hadoop working on my computer at work, which runs Fedora. Monday (yesterday), I found out that the source of my troubles was that Hadoop requires Sun's Java. Fedora, by default, comes with GNU's Java. This would be OK in most cases, as GNU's Java mimics Sun's Java and has most of the same functionality. However, certain projects (like Hadoop) require Sun's Java, so it's a good idea to have both on the system.

    Installing Sun's Java on Fedora is a pain. No single guide online got me through installing the damn thing on my work machine, which led to quite a bit of frustration. However, since things finally seem to be working correctly, I thought I would share my procedure with those of you who may have to do this at some point. Even though the majority of my code posts concern Ubuntu, I think this is a worthwhile diversion.

    To install Java properly (for the coders out there), there are two parts: installing i.) the Java Runtime Environment (JRE) and ii.) the Java Development Kit (JDK). Most of the instructions here are an amalgamation of instructions found on two sites, each of which had some steps that worked for me and others that did not.

    ----

    Before we begin: make sure your system is completely up to date. This is CRUCIAL. The following instructions probably will not work properly if your Fedora install is not fully updated. Initially skipping this step is one of the things that slowed me down quite a bit.

    INSTALLING THE JAVA RUNTIME ENVIRONMENT (JRE)

    1. Enter root mode by typing in
    su

    While I imagine it is possible to install this stuff locally, having root access is ideal.

    2. Make sure that your system has rpmdevtools and jpackage-utils. You can install these by typing:

    yum -y install rpmdevtools
    yum -y install jpackage-utils

    3. Next grab the jpackage key and set up the jpackage repositories for yum

    rpm --import http://jpackage.org/jpackage.asc
    cd /etc/yum.repos.d
    wget http://www.jpackage.org/jpackage17.repo

    If SUCCESS, go to step 4.
    If FAIL, check to make sure that you have wget installed. Install it using:
    yum -y install wget

    4. Get the latest JPackage java-x-sun-xjpp.nosrc.rpm package from the non-free branch at jpackage. The package I used was this one.
    Install this file by typing something akin to the following (yours may differ based on version numbers):
    rpm -ivh java-1.6.0-sun-1.6.0.6-1jpp.nosrc.rpm

    If this works, you will see it installed under /usr/src/redhat/

    If SUCCESS: go to step 5.
    If FAIL: Check permissions. You may have to change permissions as follows:

    chmod 755 java-1.6.0-sun-1.6.0.6-1jpp.nosrc.rpm

    If SUCCESS, go to step 5.
    If FAIL, I cannot help you. Sorry!

    5. Get the latest binary. I got mine from here. It should look something like jre-6u6-linux-i586.bin.
    Move this file to the SOURCES directory. So something like,

    mv jre-6u6-linux-i586.bin /usr/src/redhat/SOURCES/.

    Next, rebuild the java rpm using

    rpmbuild -ba java-1.x.0-sun.spec

    If SUCCESS, you should now be able to see the binary in /usr/src/redhat/RPMS/i586/ . Go to step 6.
    If FAIL, make sure that you are running this command in the /usr/src/redhat/SPECS/ directory. If you do not have a spec file, you did something wrong in the previous steps, or I can't help you.

    6. Install the binaries in /usr/src/redhat/RPMS/i586/ by using the following command:
    yum --nogpgcheck localinstall java*.rpm

    If SUCCESS, go on to part 2.
    IF FAIL, the most likely reason is a message along the lines of "--nogpgcheck is not a valid option for yum"; if so, just omit it. Alternatively, you can go into the /usr/src/redhat/RPMS/i586/ folder manually and install all the RPMs by double-clicking on them.


    INSTALLING JDK

    1. Make sure you have the following packages: rpm-build and fedora-rpmdevtools. If not, install them using
    yum install fedora-rpmdevtools
    yum install rpm-build

    2. Grab the Sun JDK rpm file. I got mine from here. The file should look something like: jdk-6u6-linux-i586-rpm.bin

    Run it by doing the following:
    chmod 755 jdk-6u6-linux-i586-rpm.bin
    ./jdk-6u6-linux-i586-rpm.bin

    If SUCCESS, you should be able to see a whole bunch of RPMs located in /usr/java/jdk1.6.0_06 and a new directory in /opt/sun . Move to step 3

    3. Next, install the RPM. You'll also need a compat file from Jpackages. I got mine here. It should look something like java-1.6.0-sun-compat-1.6.0.06-1jpp.src.rpm.

    Do the following:
    yum --enablerepo=jpackage-generic-nonfree install java-1.6.0-sun-compat-1.6.0.06-1jpp.src.rpm

    Say yes when prompted. This should complete installation of JDK.

    4. Keep in mind that the default Java may still not be Sun Java. To fix this, we need one last step:

    /usr/sbin/alternatives --config java

    This should show two options: GIJ (the older java) and Sun's Java. On my machine it was option 2. Enter the number that corresponds to the Sun Java install and press enter.

    Now, if you type in:

    java -version

    It should print out something like:

    java version "1.6.0_06"
    Java(TM) SE Runtime Environment (build 1.6.0_06)
    Java HotSpot(TM) Client VM (build 1.6.0_06, mixed mode, sharing)


    You're done!

    Additionally, you'll probably want to install the mozilla plugin. Do something akin to the following
    ln -s /usr/lib/jvm/java-1.6.0-sun-1.6.0/jre/plugin/i386/ns7/libjavaplugin_oji.so /usr/lib/mozilla/plugins/libjavaplugin_oji.so
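One gotcha with ln -s: it happily creates a broken link if the target path doesn't exist (the JVM path above varies by version, so it's worth checking). A sketch of the sanity check, using a throwaway link rather than the real plugin path:

```shell
# Create a throwaway symlink and verify it resolves. The same -e/-L
# checks apply to the libjavaplugin_oji.so link above.
touch target.so
ln -sf "$PWD/target.so" link.so
readlink link.so                       # prints the path the link points to
[ -e link.so ] && echo "link resolves" || echo "broken link"
```

-L tells you something is a symlink; -e tells you whether it resolves to an existing file. A broken plugin link is exactly why Firefox would silently fail to pick up Java.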


    And that should fix everything. Hope this post will eventually be helpful to someone. Comments welcome.

    Saturday, June 14, 2008

    Entracte - Texas

    "Stuck on you
    I've got this feeling deep down in my soul that I just can't lose
    Guess I'm on my way
    Needed a friend
    And the way I feel about you I guess I'll be with you 'til the end
    Guess I'm on my way
    Mighty glad you stayed"


    "Oh, the North country winters keep a gettin' me now
    Lost my money playin' poker so I had to up and leave
    But I ain't a turnin' back
    To livin' that old life no more..."

    "Suddenly I see this is what I wanna be
    Suddenly I see why the hell it means so much to me
    'Cause this is what I wanna be
    Suddenly I see why the hell it means so much to me..."



    Finally in Texas. A few more days before I officially start my position as an R.A. It's also the first day of a new long-distance relationship. What will the future hold? Who knows? All I know is, all I can do is look forward... and somehow, it will all turn out just fine.

    Saturday, May 10, 2008

    Done.



    Now I'm just exhausted.

    Got my M.S. this semester. Now to rest, celebrate, and re-organize my life. Kevin and I escaped to Rochester for this weekend, so this should be good.

    Sunday, April 06, 2008

    Getting encrypted DVDs to work in Ubuntu - Gutsy

    Whoa, this took a freakin' long time to fix! Here was my problem: VLC wouldn't play encrypted DVDs, which was really aggravating. After installing a whole slew of codecs, I think it's only right to finally share what ended up working. First, I should mention it -still- doesn't work in VLC. In fact, the only way I got it to work was to use a different player altogether: gxine. So here is the workaround, which I got from this link.

    1.) Add the following lines to /etc/apt/sources.list
    ##Medibuntu-Ubuntu 7.10 gutsy gibbon
    ##Please report any bug on https://launchpad.net/products/medibuntu/+bugs
    deb http://medibuntu.sos-sts.com/repo/ gutsy free non-free
    deb-src http://medibuntu.sos-sts.com/repo/ gutsy free non-free

    2.) from the command line, run: sudo apt-get update

    3.) sudo apt-get install libdvdread3 libdvdcss2 gstreamer xine-ui libxine1-ffmpeg gxine regionset

    4.) Make sure your region is set using regionset (set region to 1) -- I suspect most people won't need to do this

    5.) from the command line, run: sudo /usr/share/doc/libdvdread3/examples/install-css.sh, or simply: sudo /usr/share/doc/libdvdread3/install-css.sh

    6.) Go to System --> Preferences --> Removable Drives and Media

    7.) Click on the "multimedia" tab. Under "Video DVD discs" Click "play video DVD discs when inserted" and enter the following for the command:
    gxine -S dvd:/

    Pop in a DVD and it should work!

    ---------------------------------
    UPDATE: ok... so it -does- also work with VLC now. Win?
    UPDATE 2: It works with xine as well :-) Sick. Ok, I hereby upgrade this from "bandaid-ed" to fixed!

    Thursday, March 06, 2008

    It's Official

    Friends,

    I have just been notified that I've been accepted into the PhD Program in Computer Science at Texas A&M University. I will be working with one of the most stellar advisors that a girl could ever hope for in a field that is cutting edge.

    So I guess this is it then... I am leaving RPI after this semester (hopefully with a Masters).

    It's official.

    -Suzanne

    Saturday, March 01, 2008

    New Laptop fun

    Got a new laptop: a Lenovo IdeaPad Y410 from Newegg. Excellent stats:

    14.1" Widescreen (1280 x 800)
    Intel Core 2 Duo 1.66 Ghz
    250 GB HD (5400 rpm, SATA)
    2 GB RAM (DDR2, 2 x 1 GB DIMMs)
    2 MB L2 cache
    etc..

    Only problems:
    1.) Preloaded with Windows Vista (I'm getting used to it)
    2.) Sound in Linux (Ubuntu) doesn't work

    Kevin and I spent some quality time today trying to fix 2.), and since KT Tunstall now plays happily on my Linux partition, I'd call it a success. Here are our steps:

    Using instructions found here, we did the following (to summarize):

    -sudo apt-get install po-debconf debhelper quilt alsa-base libc6-dev

    -get the latest alsa-driver source --> unpack it, then run:
    ./configure
    make
    sudo make install

    ::eat a cookie::
    ::look at pretty kitten pictures::

    edit /etc/modprobe.d/alsa-base:
    -add line:
    options snd-hda-intel index=0 model=fujitsu

    edit /etc/modules:
    -add:
    snd-hwdep
    snd-hda-intel

    Also, do the following:
    sudo rm /lib/modules/2.6.22-14-generic/ubuntu/media/snd-hda-intel/snd-hda-intel.ko
    sudo ln -s /lib/modules/2.6.22-14-generic/kernel/sound/pci/hda/snd-hda-intel.ko /lib/modules/2.6.22-14-generic/ubuntu/media/snd-hda-intel/snd-hda-intel.ko

    Restart, and sound works! Overall, great laptop.

    At least wireless worked right off the bat....
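    For reference, here's the whole sound fix collected into one script. The kernel version and the "fujitsu" model quirk are specific to my Y410 on Ubuntu 7.10 (check `uname -r` and your codec on your own machine), and it assumes the alsa-driver tarball is already unpacked in the current directory. By default it only prints each command; set dry_run=0 to actually execute.

    ```shell
    #!/bin/sh
    # Consolidated Y410 ALSA fix (Ubuntu 7.10, snd-hda-intel).
    # dry_run=1 (the default) prints each command instead of executing it.
    dry_run=${dry_run:-1}
    run() { if [ "$dry_run" = 1 ]; then echo "+ $*"; else "$@"; fi; }

    KVER=2.6.22-14-generic   # check `uname -r` on your own install

    # Build prerequisites, then the latest ALSA driver from source
    run sudo apt-get install po-debconf debhelper quilt alsa-base libc6-dev
    run sh -c "cd alsa-driver-*/ && ./configure && make && sudo make install"

    # Force the fujitsu model quirk for snd-hda-intel
    run sudo sh -c "echo 'options snd-hda-intel index=0 model=fujitsu' \
      >> /etc/modprobe.d/alsa-base"

    # Make sure the sound modules load at boot
    run sudo sh -c "printf 'snd-hwdep\nsnd-hda-intel\n' >> /etc/modules"

    # Swap Ubuntu's bundled module for the freshly built one
    run sudo rm /lib/modules/$KVER/ubuntu/media/snd-hda-intel/snd-hda-intel.ko
    run sudo ln -s /lib/modules/$KVER/kernel/sound/pci/hda/snd-hda-intel.ko \
      /lib/modules/$KVER/ubuntu/media/snd-hda-intel/snd-hda-intel.ko
    ```

    Then restart (and eat a cookie).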

    Thursday, February 21, 2008

    Surreal

    Sometimes it feels like I'm moving so fast that I'm simply standing still. The days are starting to blur; I am on a train. I am in a cocoon. If I glance outside my window, an endless stream of people rush past, the days and nights rise and fall in a ridiculous cycle. It's like watching a segment of a movie on endless repeat. It's all very dream-like... is this reality? So I stare back at my computer, trying to accomplish a fraction of what I feel like I should be accomplishing, letting my fingers curl onto my keyboard. They long to type:

    When does this end?


    All I ever wanted was to feel the way you'd feel
    All I ever wanted was a chance to make it real

    I'm in love with you
    And I can pull it through

    All I ever wanted is a place out by the sun
    To watch the world go by and take each day as it comes
    All I ever wanted was a chance to catch my breath
    To see the world go by and lay my ghosts to rest

    Give me a chance to catch my breath
    So I can lay my ghosts to rest

    So ask me tomorrow what I thought of yesterday
    There's so many things that I could not explain

    I'm in love with you
    And I can pull it through

    All I ever wanted is a place out by the sun
    To watch the world go by and take each day as it comes
    All I ever wanted was a chance to catch my breath
    To see the world go by and lay my ghosts to rest

    Give me a chance to catch my breath
    So I can lay my ghosts to rest

    All I ever wanted was a chance to catch my breath
    All that I ever wanted was to lay my ghosts to rest

    All I ever wanted was a chance to catch my breath
    All I ever wanted was to lay my ghosts to rest

    - Ghosts, Dirty Vegas

    Thursday, February 14, 2008

    Happy Valentine's Day!

    Remembered this video today... what a great song...

    Sunday, January 27, 2008

    Why do people become computer science majors?

    Thought of this comic today. I miss Project Y.

    Life is heading into transition again soon. Will talk about more later.

    -Suzanne

    Tuesday, December 25, 2007

    Merry Christmas

    Merry Christmas, everyone.

    I'll probably have an update post today, but we just got back from midnight mass (and yes, it's almost 2 am!), and I absolutely had to post the most astounding revelation I had while I was sitting in mass tonight:

    Love is Empathy.

    Isn't that beautiful? Love is Empathy. It all makes sense now.

    Have a wonderful night (or morning). Dad and I are waiting for Mom to go to bed so we can surprise her by setting up her gift for her so it's ready in the morning. For safety's sake, I'll tell you what it is later today ;-)

    G'night!

    Monday, December 24, 2007

    Top 10 Viral Videos of 2007

    So Time has this interesting piece listing (and providing links to) the Top 10 Viral Videos of 2007. It's really worth taking a look at. I was shocked to find 5 videos that I actually hadn't seen before... hmm; seems like grad school is affecting how well I waste my time on the internets. How many did you fail to recognize?

    Of note is "Iran so Far", another SNL short starring one of the Narnia cupcake guys. Hilarious, but I can see why the president of Iran hates us so much.

    Other Time top 10 lists you may be interested in are:
    Top 10 T-Shirt Worthy Slogans
    This video from the Top 10 Animal Stories (nothing else is really worth it)

    It's the afternoon of Christmas Eve, and I'm busy programming. I'm actually taking a break right now; I may get back to the coding... but then again I may not. It -is- Christmas Eve, after all. Status report: if I can get one more component fixed, life will be good, and maybe I'll switch gears for a while. If not... meh. Not sure if I care right now. I do need to rest; that is of paramount importance.

    Anyways, wishing you and yours a merry Christmas Eve.

    Friday, November 30, 2007

    Graph Theory + Kitties

    My favorite kind of combination.

    Check out Chat Noir, a really fun game that is applicable to some problem within graph theory, I dare say something associated with pursuit and evasion, though I am hesitant to use that label here.