Did you just install a new RHEL 6 system? If so, you might have used the familiar rhn_* commands to register it. Unfortunately, those no longer work as of RHEL 6.3. Instead, they leave you with a broken setup where yum always barfs with the above error.
Red Hat’s knowledgebase article claims that it’s possible to set up traditional RHN on a 6.x system, but their instructions don’t seem to work.
Instead, first you’ll need to get rid of the old ‘RHN Classic’:
rm -f /etc/sysconfig/rhn/systemid
rm -rf /var/cache/yum/*
yum clean all
Now delete the system from the classic RHN console on the web, to free up the entitlement. After that, you can register for the new improved RHN, using a different set of commands:
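The new commands are part of subscription-manager. A sketch of the registration, assuming a valid Red Hat account (the username is a placeholder, and on 6.3-era systems the verb was subscribe, later renamed attach):

subscription-manager register --username you@example.com
subscription-manager subscribe --auto
yum repolist

The last command just confirms that yum can now see the subscribed repositories.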
Also, you don’t want to go to https://rhn.redhat.com/ any more. That’ll only show you the “Classic RHN” systems. Instead, go to https://access.redhat.com/management/ which will show you both. Yes, you have to manage your 6.x and pre-6.x systems via two different web UIs.
You can also access the new management interface via the Subscriptions menu; however, I missed it for ages because I didn’t realize that systems were now referred to as “consumers”. In Red Hat’s crazy new terminology you’re a “distributor”, because you distribute keys to the “consumers”.
Sometimes the hardest part of improving your productivity is being able to notice that you’re doing something sub-optimal multiple times each day.
I was sitting hacking on some documents this morning when I realized that I frequently follow this usage pattern:
1. Locate a file using the locate command (often a Linux configuration file of some sort).
2. Change directory to the directory containing that file, which is often a long way from the root directory or from my current directory.
3. Do some stuff using the file.
4. Go back to what I was doing before.
A typical interaction:
w510:~/WIP 740$ locate s-pre-01.tex
w510:~/WIP 741$ cd /usr/local/context/tex/texmf-context/tex/context/base
w510:/usr/local/context/tex/texmf-context/tex/context/base 742$ cp s-pre-01.tex ~/WIP/
w510:/usr/local/context/tex/texmf-context/tex/context/base 743$ cd ~/WIP/
The problem here is that long path. I either have to type it with assistance from tab completion, or copy and paste it using the mouse.
I realized it would be really handy if I could do something like cd `locate s-pre-01.tex`
Of course, that doesn’t work for a couple of reasons, most notably that locate outputs a path to a file, not to a directory.
I checked to see if locate had an option to output only the directory name of the match, or if cd had an option to accept a filename and move to the same directory as the file. No on both counts.
Next, I checked to see if something like cdargs would solve the problem, but it seemed not.
I had the feeling a lot of other people had probably wanted to do what I wanted to do, so my next stop was Google. That turned up unhelpful monstrosities like cd "$(dirname "$(find / -type f -name ls | head -1)")"
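In the end, the cleanest approach I could see was to wrap that same dirname idea in a short shell function. A sketch (cdl is a name I made up, and it has to be a function rather than a script so that the cd affects the interactive shell):

```shell
# cdl: cd to the directory containing the first file that locate finds.
cdl() {
    local f
    f=$(locate "$1" | head -n 1)
    if [ -z "$f" ]; then
        echo "cdl: no match for $1" >&2
        return 1
    fi
    cd "$(dirname "$f")"
}
```

With that in ~/.bashrc, the interaction above collapses to cdl s-pre-01.tex.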
As a Unix guy, the IBM Lotus Domino server log has always annoyed me. Since Domino is cross-platform, and Windows doesn’t have a syslog, Domino keeps its logs in a Domino database. While that makes some sense, it leads to colossal overheads; the log.nsf database is often 95% unused space. By switching my 90 days of logs to compressed syslog files, I was able to reclaim gigabytes of space.
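The rotation and compression is standard logrotate territory. A minimal sketch for a 90-day policy, assuming the Domino console output ends up in a file such as /var/log/domino.log (that path is hypothetical):

/var/log/domino.log {
    daily
    rotate 90
    compress
    delaycompress
    missingok
    notifempty
}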
In addition, because syslog files are just text, you can grep them, and use all kinds of open source tools to analyze them. You can use rsyslog to concentrate logs from multiple servers into a single set of logs. You can set up tasks to page you about problems. And so on.
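The multi-server concentration can be sketched in rsyslog’s classic syntax, assuming the Domino servers log via the local0 facility and that loghost.example.com is your central collector (both are assumptions):

# on each Domino server (/etc/rsyslog.conf) — @@ forwards over TCP, @ over UDP:
local0.*    @@loghost.example.com:514

# on the central collector, file the messages per originating host:
$template PerHost,"/var/log/domino/%HOSTNAME%.log"
local0.*    ?PerHost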
Note that you still end up with events in log.nsf with this solution, but you can set it to a much shorter retention time. Hopefully at some point rc_domino will get cleaner support for console log processing. Ideally Domino would get official syslog support, but I won’t be holding my breath for that.
When you have a DB2 database which multiple people can update, sooner or later you are likely to end up with a deadlock. You’ll issue a simple query to update some data, and wait… and wait… and wait. I hit this problem this morning. It took a while to work out how to diagnose the problem and fix it.
In the DB2 Control Center object view, drill down through the systems, instances and databases until you find the database you’re interested in. Right click and select “Applications”. It’ll take a while, but you should get a list of all the applications running (or deadlocked).
If you click to sort the list by authorization ID, you should find it easy enough to locate your deadlocked command. It’ll be listed with status “Lock Waiting”. Click to select it, then click the “Show Lock Chains” button.
Now you should see a graphical flow chart showing your task as a box, linked to a box representing whichever task holds the lock that is preventing your task from finishing. You can then right-click the task that’s causing yours to deadlock, and choose “Force” from the pop-up menu.
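If you prefer the command line to the Control Center, the same diagnosis and fix can be sketched with the standard DB2 CLP commands (MYDB and the application handle 1234 are placeholders):

db2 connect to MYDB
db2 list applications show detail      # find the entry in 'Lock-wait' status
db2pd -db MYDB -locks                  # see which application holds the lock
db2 "force application (1234)"         # kick off the offending connection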
You need admin privileges to do this, of course. There’s a warning dialog explaining why it’s a bad idea. But sometimes you need to do things that are bad ideas. In my case, it turned out that a colleague had 77 open sessions with locks on various tables.
Further interrogation of the guilty party revealed that they were using a graphical query tool to browse the data, rather than writing SQL. Refining an existing query made the tool helpfully lock down the output of the query, so that the additional filter clauses could be edited interactively in a window without any unexpected changes occurring to confuse things. Of course, the tool didn’t unlock the table until the entire browsing session was killed. So yet another example of the dangers of using pointy-clicky interfaces instead of actually knowing how to program.