Showing posts with label fix. Show all posts

Wednesday, September 10, 2008

Running Hadoop on OS X 10.5 (64-bit) single node cluster

These instructions are for installing and running Hadoop on a single-node cluster. This tutorial follows the same format, and largely the same steps, as the incredibly thorough and well-written tutorial by Michael Noll about Ubuntu cluster setup. This is pretty much his procedure with changes made for OS X users. I also added other things that I was able to piece together from the Hadoop Quickstart and the forums/archives.

Step 1: Creating a designated hadoop user on your system


This isn't -entirely- necessary, but it's a good idea for security reasons.
To add a user, go to:
System Preferences > Accounts
Click the "+" button near the bottom of the account list. You may need to unlock this ability by hitting the lock icon at the bottom corner and entering the admin username and password.
When the New Account window appears, enter a name, a short name, and a password. I entered the following:
Name: Hadoop
Short name: hadoop
Password: MyPassword (well, you get the idea)

Once you are done, hit "Create Account".
Now, log in as the hadoop user. You are ready to set up everything!

Step 2: Install/Configure Preliminary Software



Before installing Hadoop, there are several things that you need to make sure you have on your system.

1. Java, and the latest version of the JDK
2. SSH

Because OS X is awesome, you actually don't have to install these things. However, you will have to enable and update what you have. Let's start with Java:

Updating Java
Open up the Terminal application. If it's not already on your dock, you can access it through
Applications > Utilities > Terminal
Next check to see the version of Java that's currently available on the system:
$:~ java -version
java version "1.5.0_13"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_13-b05-237)
Java HotSpot(TM) Client VM (build 1.5.0_13-119, mixed mode, sharing)

You may want to update this to Sun Java 6, which is available as an update for OS X 10.5 (Update 1). It's currently only available for 64-bit machines, though. You can download it here.

After you download and install the update, you are going to need to configure Java on your system so the default points to this new update.
Go to Applications > Utilities > Java > Java Preferences
Under "Java Version" hit the radio button next to "Java SE 6"
Down by "Java Application Runtime Settings" change the order so Java SE 6 (64 bit) is first, followed by Java SE 5 (64 bit) and so on.
Hit "Save" and close this window.
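The GUI preference covers applications, but command-line builds read the JAVA_HOME environment variable, so it can help to pin it in your shell profile as well. A minimal sketch, assuming bash and the stock OS X 10.5 layout (this addition to ~/.profile is my suggestion, not part of the update itself):

```shell
# Hypothetical ~/.profile addition: /Library/Java/Home is a symlink
# that tracks the version selected in Java Preferences on OS X 10.5.
export JAVA_HOME=/Library/Java/Home
```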

Now, when you go to the terminal, and type in "java -version" you should get the following:
$:~ java -version
java version "1.6.0_05"
Java(TM) SE Runtime Environment (build 1.6.0_05-b13-120)
Java HotSpot(TM) 64-Bit Server VM (build 1.6.0_05-b13-52, mixed mode)

and for "javac -version":
$:~ javac -version
javac 1.6.0_05
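If you script your setup, the version can also be checked programmatically instead of by eye. A small sketch (the `java_major` helper is hypothetical, not part of Hadoop or the JDK):

```shell
# Extract the major version ("1.5", "1.6", ...) from a `java -version` line.
java_major() {
    echo "$1" | sed 's/.*"\(1\.[0-9]\)[^"]*".*/\1/'
}

# On a live system you would feed it: java -version 2>&1 | head -n 1
java_major 'java version "1.6.0_05"'   # prints 1.6
```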


Onto ssh!

SSH: Enabling Remote Login and self-login
SSH also comes installed on your Mac. However, you need to enable access to your own machine (so hadoop doesn't ask you for a password at inconvenient times).
To do this, go to System Preferences > Sharing (under Internet & Network)
Under the list of services, check "Remote Login". For extra security, you can hit the radio button for "Only these Users" and select hadoop

Now, we're going to configure things so we can log into localhost without being asked for a password. Type the following into the terminal:

$:~ ssh-keygen -t rsa -P ""
$:~ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

Now try:
$:~ ssh localhost

You should be able to log in without a problem.
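If ssh still asks for a password, the most common cause in my experience is permissions: sshd ignores keys that sit in a group- or world-writable ~/.ssh. A hedged fix-up (paths assume the defaults used above):

```shell
# Tighten permissions so sshd will trust the key files.
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
[ -f "$HOME/.ssh/authorized_keys" ] && chmod 600 "$HOME/.ssh/authorized_keys" || true
```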

You are now ready to install Hadoop. Let's go to step 3!

Step 3: Downloading and Installing Hadoop



So this actually involves several smaller steps:

1. Downloading and Unpacking Hadoop
2. Configuring Hadoop
3. Formatting and Testing Hadoop

After we finish these, you should be ready to go! So let's get started:

Downloading and Unpacking Hadoop
  • Download Hadoop. Make sure you download the latest version (as of this blogpost, 0.17.2 and 0.18.0 are the latest versions). We call our generic version of hadoop hadoop-* in this tutorial.

  • Unpack the hadoop-*.tar.gz in the directory of your choice. I placed mine in /Users/hadoop. You may also want to set ownership permissions for the directory:

    $:~ tar -xzvf hadoop-*.tar.gz
    $:~ chown -R hadoop hadoop-*
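To avoid typing hadoop-*/bin/ before every command, you can optionally put Hadoop on your PATH. A sketch for ~/.profile; the install path here is just an example (I used hadoop-0.17.2.1 under the hadoop user's home), so adjust it to wherever you unpacked the tarball:

```shell
# Hypothetical ~/.profile lines; HADOOP_HOME is a convenience variable
# for this tutorial, not something Hadoop requires for single-node use.
export HADOOP_HOME="$HOME/hadoop-0.17.2.1"
export PATH="$PATH:$HADOOP_HOME/bin"
```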

    Configuring Hadoop

    There are two files that we want to modify when we configure Hadoop. The first is conf/hadoop-env.sh. Open this in nano or your favorite text editor and do the following:

    - uncomment the export JAVA_HOME line and set it to /Library/Java/Home

    - uncomment the export HADOOP_HEAPSIZE line and keep it at 2000

    You may want to change other settings as well, but I chose to leave the rest of hadoop-env.sh the same. Here is an idea of what part of mine looks like:

    # Set Hadoop-specific environment variables here.

    # The only required environment variable is JAVA_HOME. All others are
    # optional. When running a distributed configuration it is best to
    # set JAVA_HOME in this file, so that it is correctly defined on
    # remote nodes.

    # The java implementation to use. Required.
    export JAVA_HOME=/Library/Java/Home

    # Extra Java CLASSPATH elements. Optional.
    # export HADOOP_CLASSPATH=

    # The maximum amount of heap to use, in MB. Default is 1000.
    export HADOOP_HEAPSIZE=2000


    The next file that we need to set up is hadoop-site.xml. The most important parts here are hadoop.tmp.dir (which should be set to the directory of your choice) and the mapred.tasktracker.tasks.maximum property, which you should add to the file. This effectively sets the maximum number of tasks that can be run simultaneously by a task tracker. You should also set dfs.replication's value to 1.

    Below is a sample hadoop-site.xml file:

    ---------------
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>

    <property>
    <name>hadoop.tmp.dir</name>
    <value>/Users/hadoop/hadoop-0.17.2.1/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
    </property>

    <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
    <description>The name of the default file system. A URI whose
    scheme and authority determine the FileSystem implementation. The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class. The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
    </property>

    <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
    <description>The host and port that the MapReduce job tracker runs
    at. If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
    </property>

    <property>
    <name>mapred.tasktracker.tasks.maximum</name>
    <value>8</value>
    <description>The maximum number of tasks that will be run simultaneously by
    a task tracker
    </description>
    </property>

    <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
    </description>
    </property>

    </configuration>
    -----------

    Now to our last step!

    Formatting and Testing Hadoop

    Our last step involves formatting the namenode and testing our system.

    $:~ hadoop-*/bin/hadoop namenode -format

    This will give you output along the lines of

    $:~ hadoop-*/bin/hadoop namenode -format
    08/09/14 21:22:14 INFO dfs.NameNode: STARTUP_MSG:
    /***********************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = loteria/127.0.0.1
    STARTUP_MSG: args = [-format]
    ***********************************************************/
    08/09/14 21:22:14 INFO dfs.Storage: Storage directory [...] has been successfully formatted.
    08/09/14 21:22:14 INFO dfs.NameNode: SHUTDOWN_MSG:
    /***********************************************************
    SHUTDOWN_MSG: Shutting down NameNode at loteria/127.0.0.1
    ***********************************************************/


    Once this is done, we are ready to test our program.

    First, start up Hadoop. This will start a NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker on the machine.

    $:~ hadoop-*/bin/start-all.sh

    As input for our test, we are going to copy the conf folder up to our DFS.

    $:~ hadoop-*/bin/hadoop dfs -copyFromLocal hadoop-*/conf input

    You can check to see if this actually worked by doing an ls on the dfs as follows:
    $:~ hadoop-*/bin/hadoop dfs -ls
    Found 1 item
    /user/hadoop/input <dir> 2008-09-11 13:33 rwxr-xr-x hadoop supergroup

    Now, we need to compile the code. cd into the hadoop-*/ directory and do:
    $:~ ant examples

    This will compile the example programs found in hadoop-*/src/examples

    Now, we will run the example distributed grep program with the conf folder as input.

    $:~ hadoop-*/bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'

    If this works, you'll see something like this pop up on your screen:
    08/09/13 20:47:24 INFO mapred.FileInputFormat: Total input paths to process : 1
    08/09/13 20:47:24 INFO mapred.JobClient: Running job: job_200809111608_0033
    08/09/13 20:47:25 INFO mapred.JobClient: map 0% reduce 0%
    08/09/13 20:47:38 INFO mapred.JobClient: map 13% reduce 0%
    08/09/13 20:47:39 INFO mapred.JobClient: map 16% reduce 0%
    08/09/13 20:47:43 INFO mapred.JobClient: map 22% reduce 0%
    08/09/13 20:47:44 INFO mapred.JobClient: map 24% reduce 0%
    08/09/13 20:47:48 INFO mapred.JobClient: map 33% reduce 0%
    08/09/13 20:47:53 INFO mapred.JobClient: map 41% reduce 0%
    08/09/13 20:47:54 INFO mapred.JobClient: map 44% reduce 0%
    08/09/13 20:47:58 INFO mapred.JobClient: map 50% reduce 0%
    08/09/13 20:47:59 INFO mapred.JobClient: map 52% reduce 0%
    08/09/13 20:48:03 INFO mapred.JobClient: map 61% reduce 0%
    08/09/13 20:48:08 INFO mapred.JobClient: map 69% reduce 0%
    08/09/13 20:48:09 INFO mapred.JobClient: map 72% reduce 0%
    08/09/13 20:48:13 INFO mapred.JobClient: map 78% reduce 0%
    08/09/13 20:48:14 INFO mapred.JobClient: map 80% reduce 0%

    ... and so on
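As a sanity check on what the job should find, you can approximate the same search locally with plain grep (illustrative only; the two-line sample file below stands in for the real conf directory):

```shell
# Count matches of the same pattern the MapReduce job uses.
printf '<name>dfs.replication</name>\n<name>fs.default.name</name>\n' > /tmp/conf-sample.xml
grep -ohE 'dfs[a-z.]+' /tmp/conf-sample.xml | sort | uniq -c | sort -rn
# the only match in this sample is "dfs.replication"
```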

    The last step is to check if you have output!
    You can do this by doing a

    $:~ hadoop-*/bin/hadoop dfs -ls output
    Found 2 items
    /user/hadoop/output/_logs <dir> 2008-09-13 19:21 rwxr-xr-x hadoop supergroup
    /user/hadoop/output/part-00000 <r 1> 2917 2008-09-13 20:10 rw-r--r-- hadoop supergroup

    The most important part is that the number next to the <r 1> should not be 0.

    To check the actual contents of the output do a

    $:~ hadoop-*/bin/hadoop dfs -cat output/*

    Alternatively, you can copy it to local disk and check/modify it:

    $:~ hadoop-*/bin/hadoop dfs -copyToLocal output myoutput
    $:~ cat myoutput/*


    When you're done running jobs on the dfs, run the stop-all.sh command.

    $:~ hadoop-*/bin/stop-all.sh


    And that concludes our tutorial! Hope someone finds this helpful!
    Tuesday, June 24, 2008

    Installing Sun's Java JRE and JDK on Fedora 5

    So for the last couple of days, I've been trying to get Hadoop working on my computer at work, which runs Fedora. Monday (yesterday), I found out that the source of my troubles had to do with the fact that Hadoop requires Sun's Java. Fedora, by default, comes with Gnu's Java. This would be, in most cases, OK, as Gnu's Java mimics Sun's Java and has most of the same functionality. However, certain projects (like Hadoop) require Sun's Java, so it's a good idea to have both on the system.

    Installing Java on Fedora is a pain. No one guide online helped me successfully install the damn thing onto my work machine, which led to quite a bit of frustration. However, since things seem to be working correctly (finally!), I thought I would share my procedure with all of you who may have to do this at some point. Even though the majority of my code posts concern Ubuntu, I think this is a worthwhile diversion.

    To install Java properly (for the coders out there), there are two steps: i.) the Java Runtime Environment (JRE) and ii.) the Java Development Kit (JDK). Most of the instructions I'm putting here are an amalgamation of instructions found on two sites, both of which had some instructions that worked for me, and others that did not.

    ----

    Before we begin: make sure your system is completely up to date. This is CRUCIAL. The following instructions probably will not work properly if your Fedora install is not fully updated. One thing that slowed me down quite a bit is that I initially did not do this step.

    INSTALLING THE JAVA RUNTIME ENVIRONMENT (JRE)

    1. Enter root mode by typing in
    su

    While I imagine it is possible to install this stuff locally, having root access is ideal.

    2. Make sure that your system has rpmdevtools and jpackage-utils. You can install these by typing:

    yum -y install rpmdevtools
    yum -y install jpackage-utils

    3. Next grab the jpackage key and set up the jpackage repositories for yum

    rpm --import http://jpackage.org/jpackage.asc
    cd /etc/yum.repos.d
    wget http://www.jpackage.org/jpackage17.repo

    If SUCCESS, go to step 4.
    If FAIL, check to make sure that you have wget installed. Install it using:
    yum -y install wget

    4. Get the latest Jpackage java-x-sun-xjpp.nosrc.rpm package from the non-free branch at jpackage. The package I used was this one.
    Install this file by typing something akin to the following (yours may be different based on version numbers)
    rpm -ivh java-1.6.0-sun-1.6.0.6-1jpp.nosrc.rpm

    If this works, you will see it installed under /usr/src/redhat/

    If SUCCESS: go to step 5.
    If FAIL: Check permissions. You may have to change permissions as follows:

    chmod 755 java-1.6.0-sun-1.6.0.6-1jpp.nosrc.rpm

    If SUCCESS, go to step 5.
    If FAIL, I cannot help you. Sorry!

    5. Get the latest binary. I got mine from here. It should look something like jre-6u6-linux-i586.bin.
    Move this file to the SOURCES directory. So something like,

    mv jre-6u6-linux-i586.bin /usr/src/redhat/SOURCES/.

    Next, rebuild the java rpm using

    rpmbuild -ba java-1.x.0-sun.spec

    If SUCCESS, you should now be able to see the binary in /usr/src/redhat/RPMS/i586/ . Go to step 6.
    If FAIL, make sure that you are running this command in the /usr/src/redhat/SPECS/ directory. If you do not have a spec file, you did something wrong in the previous steps, or I can't help you.

    6. Install the binaries in /usr/src/redhat/RPMS/i586/ by using the following command:
    yum --nogpgcheck localinstall java*.rpm

    If SUCCESS, go on to part 2.
    IF FAIL, the most likely reason for failure is some sort of message like --nogpgcheck not a valid command for yum. Omit it then. Alternatively, you can manually go into the /usr/src/redhat/RPMS/i586/ folder and install all the rpms by double clicking on them.


    INSTALLING JDK

    1. Make sure you have the following packages: rpm-build and fedora-rpmdevtools. If not, install them using
    yum install fedora-rpmdevtools
    yum install rpm-build

    2. Grab the sun jdk rpm file. I got mine from here. The file should look something like: jdk-6u6-linux-i586-rpm.bin

    Run the rpm by doing the following:
    chmod 755 jdk-6u6-linux-i586-rpm.bin
    ./jdk-6u6-linux-i586-rpm.bin

    If SUCCESS, you should be able to see a whole bunch of RPMs located in /usr/java/jdk1.6.0_06 and a new directory in /opt/sun. Move to step 3.

    3. Next, install the RPM. You'll also need a compat file from Jpackages. I got mine here. It should look something like java-1.6.0-sun-compat-1.6.0.06-1jpp.src.rpm.

    Do the following:
    yum --enablerepo=jpackage-generic-nonfree install java-1.6.0-sun-compat-1.6.0.06-1jpp.src.rpm

    Say yes when prompted. This should complete installation of JDK.

    4. Keep in mind that the default Java may still not be Sun Java. To fix this, we need one last step:

    /usr/sbin/alternatives --config java

    This should show two options: GIJ (the older java) and Sun's Java. On my machine it was option 2. Enter the number that corresponds to the Sun Java install and press enter.

    Now, if you type in:

    java -version

    It should print out something like:

    java version "1.6.0_06"
    Java(TM) SE Runtime Environment (build 1.6.0_06)
    Java HotSpot(TM) Client VM (build 1.6.0_06, mixed mode, sharing)


    You're done!

    Additionally, you'll probably want to install the mozilla plugin. Do something akin to the following
    ln -s /usr/lib/jvm/java-1.6.0-sun-1.6.0/jre/plugin/i386/ns7/libjavaplugin_oji.so /usr/lib/mozilla/plugins/libjavaplugin_oji.so


    And that should fix everything. Hope this post will eventually be helpful to someone. Comments welcome.

    Sunday, April 06, 2008

    Getting encrypted DVDs to work in Ubuntu - Gutsy

    Whoa, this took a freakin' long time to fix! Here was my problem: VLC wouldn't play encrypted DVDs, which was really aggravating. After installing a whole slew of codecs, I think it's only right to finally share what ended up working. First, I should mention it -still- doesn't work in VLC. In fact, the only way I got it to work was to use a different player altogether, gxine. So here is the work around, which I got from this link.

    1.) Add the following lines to /etc/apt/sources.list
    ##Medibuntu-Ubuntu 7.10 gutsy gibbon
    ##Please report any bug on https://launchpad.net/products/medibuntu/+bugs
    deb http://medibuntu.sos-sts.com/repo/ gutsy free non-free
    deb-src http://medibuntu.sos-sts.com/repo/ gutsy free non-free

    2.) from the command line, run: sudo apt-get upgrade

    3.) sudo apt-get install libdvdread3 libdvdcss2 gstreamer xine-ui libxine1-ffmpeg gxine regionset

    4.) Make sure your region is set using regionset (set region to 1) -- I suspect most people won't need to do this

    5.) from the command line, run: sudo /usr/share/doc/libdvdread3/examples/install-css.sh, or simply: sudo /usr/share/doc/libdvdread3/install-css.sh

    6.) Go to System --> Preferences --> Removable Drives and Media

    7.) Click on the "multimedia" tab. Under "Video DVD discs" Click "play video DVD discs when inserted" and enter the following for the command:
    gxine -S dvd:/

    Pop in a DVD and it should work!

    ---------------------------------
    UPDATE: ok... so it -does- also work with VLC now. Win?
    UPDATE 2: It works with xine as well :-) Sick. OK, I'm updating this from "bandaided" to fixed!

    Saturday, November 10, 2007

    Ubuntu 7.10 - Gutsy Gibbon (VPN running)

    Back in linux, I follow the link and download the file. After unzipping the tar.bz2 file, I cd in and run ./vpn_install. Magic happens. Next, to start the thing. In the future, I hopefully don't have to do this, since it should be done automatically:

    sudo /etc/init.d/vpnclient_init start

    Ok, next, I need profiles. Before, when love was on the side of vpnc, I would have to enter the gateway info and two passwords. This was doable, and ok. Now I need profiles. With a touch of chagrin, I realize that this is probably the program that the help desk wanted me to run all along, though the tar file I downloaded was different. Going back to the RPI helpdesk site, I download the necessary .pcf files and put them in /etc/opt/cisco-vpnclient/Profiles/

    The following command is then used:
    sudo vpnclient connect RPI_External_VPN

    Magic happens: user names and passwords are entered, information is exchanged. Sparkles and rainbows shoot out of my terminal. And suddenly..!

    Negotiating security policies.
    Securing communication channel.

    Your VPN connection is secure.


    To check, I try to connect to my server. No problems! Hooray! VPN now works!

    Ugh. It's been fun, I think. I haven't checked RPI Wireless yet, but something tells me it will work (before it didn't). So overall, a productive and good use of time.

    I -am- hungry. Maybe food is in order? That's a good idea. Food, and a hot shower.

    Till next time...
    -Suzanne

    Ubuntu 7.10 - Gutsy Gibbon (Fixing VPN)

    Now in Windows, I test out my Cisco external VPN connection. After successfully being able to connect to my server (which is now behind the firewall -- another rant), I knew there wasn't anything wrong with External VPN. So one of two things must have happened.

    By some hidden magic, External VPN no longer supports vpnc; or, by some other hidden magic, vpnc changed from Ubuntu 7.04 to 7.10 in a manner unproductive for me.

    I head over to the help-desk site and see the package that they have for Linux. I shudder. I -really- don't want to deal with their shit if I can get things working on my own. Back to the Ubuntu forums. Sure enough, other people seemed to be having a similar problem.

    I now have the following instructions:

    You can find the client here:
    http://linux-support.hiwi.rz.uni-kon...1.0640.tar.bz2
    Extract the file. Make sure, that you have installed the kernel headers and then run
    sudo ./vpn_install

    Link

    h'okay, then, let's see what we get. Reboot into linux.

    Ubuntu 7.10 - Gutsy Gibbon

    So I developed a nasty head cold last night, and as of this morning, am still feeling pretty terrible. I think among the things necessary for a cure are food and fluids, neither of which I've had since my meager bowl of oatmeal last night.

    Because I wasn't feeling well, I played on my DS, but eventually got bored. So I decided to upgrade my current system, which ran Ubuntu Feisty 7.04, to Ubuntu Gutsy 7.10. Flipped open system upgrade, and told it to upgrade to 7.10.

    I immediately began having doubts. Usually, when a new OS comes out, there are bugs here and there, and I winced remembering my wireless adventure dealing with my network stuff. Things were working perfectly now, after all.. why change things?

    To test how well the upgrade will actually upgrade my system, rather than downgrade it, I authorized the go ahead.

    The upgrade was relatively painless; it took about an hour to download the upgrades, and another hour to install all the updates, which was pretty reasonable, especially since I don't have the best of wireless connections here at the apartment. So, I restarted the system to see what it all looked like.

    Things were ... mostly the same. Gaim had disappeared, since now Pidgin is exclusively supported. I added the Pidgin Icon to the panel, rearranged it some, and my panel looked as good as normal, save for this weird purple pigeon icon being where my simple, yet elegant yellow man icon used to be. Ah well, I'll get over it. The biggest change was that wireless was no longer working. At all.

    SHIT, I thought. This could be for several reasons. 1.) Something different is going on than it was in Feisty. 2.) Something different is going on, and it's going worse because I TOLD Ubuntu during the upgrade process to keep the modified blacklist file. SHIT, I thought, shit, shit, shit shit. Worst of all, at that point in time, I had forgotten that the blacklist file was called "blacklist", so I didn't know where to look.

    Rebooted into windows. Came to my blog, saw the notes I made last time about blacklisting goodness. Went back into linux. Located and went into this file:
    /etc/modprobe.d/blacklist

    After looking at the tail of the list, sure enough, my original changes were there. Shit, I thought. So it's no longer hostap that's the problem... wait, but what if hostap has control, and orinoco is the trouble? A few quick changes left the tail of the blacklist file looking like this:

    #buggy network-manager causes orinoco to fight with it for wireless card
    #blacklist hostap
    #blacklist hostap_pci
    blacklist hermes
    blacklist p80211
    blacklist prism2_pci
    blacklist orinoco
    blacklist orinoco_pci

    With my fingers crossed, I restarted the system. Ubuntu came back up, and lo and behold! Wireless was working! Hooray! Scanning is re-enabled, and things looked like it was full of win. I was pleased to see that vpnc was part of the default install, and that all my previous settings were saved. Let's see if I can connect to the external VPN. A button click and two passwords yielded:

    VPN Connect Failure

    Could not start the VPN connection 'External VPN' due to a connection error.

    The VPN login failed because the VPN program could not connect to the VPN server.


    Shit, I thought. Not good. I checked the configuration file. No problems seemed to be there; indeed, these were my previous settings. I was able to connect via External VPN with no problem when I last checked, which was two days ago. Could something actually be wrong with the External VPN connection at RPI? Or is it vpnc? Let's check.

    System reboot into Windows.

    Wednesday, August 08, 2007

    Wireless Adventure - Part IV (Wireless in Linux)

    As I write this, I'm logged into my Ubuntu partition on my laptop. We have total and complete success!

    This is great. Not only can I scan for wireless networks now, I can also connect! I'm so excited.

    Lessons learned:
    network-manager: friend
    orinoco: friend
    hostap: THE DEVIL

    Ubuntu Feisty Fawn: Completely f-ing awesome.

    Now, to sit back and enjoy my intrawebs on my linux partition! How shall I celebrate?

    I know. Watch as many Flight of the Conchords episodes as I can!

    Then some Planet Earth! ^-^

    I am completely pleased with myself.


    Wireless Adventure: Success!

    Wireless Adventure - Part III (Blacklisting hostap)

    So blacklisting Orinoco had some positive effects. Now, I can scan for wireless networks (whoopee!) As you can imagine, this was very exciting. I thought I had finally got it to work.

    Unfortunately no.

    I'm now thinking about blacklisting hostap instead of Orinoco:
    my /etc/modprobe.d/blacklist file should now read like this:

    blacklist prism2_pci
    blacklist hostap_pci
    blacklist hostap

    Let's see how this works!

    More update goodness soon.

    Wireless Adventure - Part II (Blacklisting Orinoco)

    More poking around the internets. Luckily, I found this.

    To quote:
    The problem is that the hostap and orinoco kernel modules are competing for control of the card. This is mentioned as a likely problem on NetworkManager site:
    (http://live.gnome.org/NetworkManagerHardware)

    hostap: "Supports unencrypted, WEP, WPA, and WPA2 networks. Be aware that if you have both this driver and the 'orinoco' driver installed, they may fight for control of the wireless card and render it inoperable to NetworkManager. You should either disable one of these drivers, or ensure that only one driver is able to control the card."


    Thank you, Brett!

    To summarize the instructions:
    1. Check to make sure I'm dealing with the right network card: I entered the following command in bash:

    :~ lspci | grep Network
    I got:
    02:02.0 Network controller: Intersil Corporation Prism 2.5 Wavelan chipset (rev 01)

    HA! It matched. Yes! Next, for adding some blacklisting marks to
    /etc/modprobe.d/blacklist:

    blacklist orinoco
    blacklist orinoco_pci
    blacklist hermes
    blacklist p80211
    blacklist prism2_pci

    Last step, reboot!
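After rebooting, you can confirm which drivers actually grabbed the card. On a real machine you'd run `lsmod | grep -E 'orinoco|hostap'`; the sample output below is made up, just to show what a successful blacklist should look like (orinoco lines gone, hostap present):

```shell
# Simulated `lsmod` output -- substitute the real command on your laptop.
sample='hostap_pci 45120 0
hostap 112708 1 hostap_pci
snd_intel8x0 31628 2'
echo "$sample" | grep -E 'orinoco|hostap'
```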

    Completing this process took less than two minutes (even less time than it took me to install WICD), and it was relatively straightforward.

    I still don't know if it will work yet. I have to go home and see. But I have my fingers crossed! Results later. Hopefully, this should do it!

    Wireless Adventure - Part I (WICD)

    I looked for suggestions. The first one was WICD. It looked great. A lot of people seemed to be having the same problem, and WICD seemed to just fix it very quickly. Also, the Digg reviewers were treating it like the second coming of Christ: "GREAT interface! network-manager BLOWS! ALL my wirless problems were FIXED when I started using WICD! AND it improved my sex life! FIVE+++ StArS!" Sheesh.

    Installation was a breeze. I was liking this already! Now came the fun part: testing it out. The interface was a lot larger than network-manager's. One thing I didn't like about it right away is that it didn't automatically connect to my WIRED network, which was annoying. Furthermore, it required me to create a profile for each IP (for -wired- connections). I didn't like this at all. Since I connect to the internet all over the place, the last thing I wanted was a bunch of "profiles" cluttering up the interface, most of which I will probably never use again. Of course, I won't delete any of them in the off-chance that I -do- use them again (and who wants to enter all that information again?). Too aggravating. The kicker was that it wasn't detecting the wireless network present.

    That would have been okay. network-manager wouldn't automatically detect wireless networks either; I just had to enter them in manually. But, as it would turn out, in WICD there is no way to manually enter a wireless network it didn't detect (they had a "hidden network" box, which allowed you to enter an ESSID but no encryption key, which wasn't satisfactory). Maybe I should have given WICD more time. Maybe I should have poked around more. Maybe I was dumb and wasn't looking at the interface closely enough. Could the solution have still been there?

    Whatever. I uninstalled WICD and reinstalled network-manager. Back to square one!

    Wireless Adventure - Introduction

    So my IBM ThinkPad T30, Phoenix, got a makeover earlier this summer. Due to my general laziness and strange attachment to Ubuntu Breezy, I didn't update to Dapper. When Feisty came out, I found out that I could no longer update my system, period. Shit.

    Among other things, this meant that I couldn't do an automatic update to Dapper, and then from there, into Feisty. So, I wiped my partition (after backing up my data) and did a clean install of Ubuntu Feisty Fawn 7.04. With that, plus a new fan and an additional 512 MB stick of RAM, Phoenix seemed ready to rock the world.

    I really like Feisty Fawn. It's sleeker, cleaner, and it got rid of some minor annoyances I had with Breezy. Perfect set up, I thought.

    Then it turned out that my wireless card wouldn't work.

    I was mystified. In Breezy, my wireless card worked fine. There was a bug that prevented me from scanning for wireless networks, but as long as I knew the SSID and/or password of the network I was trying to connect to, things worked fine.

    Did something go horribly wrong? Did I do something wrong? Why is my wireless card not being recognized? I rebooted into Windows. Maybe it's the network that's at fault?

    In XP (where I can scan for wireless networks fine), I located the wireless network I wanted to connect to. I entered the WEP key. Everything worked fine. So nope, nothing wrong with the card, nothing wrong with the network.

    Doing some poking around the internets revealed that this is a known bug with network-manager, which, of course, made things so much easier for me. But how to fix the bug?

    This multipart (hopefully not too long) series of posts will chronicle my wireless adventures. It will end either when I give up, or when my wireless works in Ubuntu. And I don't want to "rollback" (if that's even possible for me, considering I did a clean install of Feisty) to Dapper Drake. Let's see who wins!