Thursday, June 24, 2010

Another SQL*Plus thing I learned...

In the past, when you exited SQL*Plus, it would always commit - an implicit "commit work" was issued for you right before it disconnected your session.

It had always been that way. As of 11g Release 2, it doesn't have to be.

There is a new exitcommit setting. It defaults to ON, which is the way it has always worked - but now you can set it to OFF, which means a rollback will be issued instead of a commit.
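For example, a session with the new setting could look like this (just a sketch - the table T and the exact output are purely for illustration):

    SQL> show exitcommit
    exitcommit ON
    SQL> set exitcommit off
    SQL> insert into t values ( 1 );

    1 row created.

    SQL> exit

With exitcommit set to OFF, that insert is rolled back when the session ends instead of being silently committed.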

Things change over time... If you have a script that relies on SQL*Plus committing upon exit, bear in mind that this is no longer "a fact". It is still highly probable that SQL*Plus will commit when you exit - but it is not a sure thing like it used to be. Yet another new thing I learned while updating the book Expert Oracle Database Architecture. I was just reminded of it today while I was going over the proofs of the chapters...

See http://download.oracle.com/docs/cd/E11882_01/server.112/e10823/ch_twelve040.htm#BABEGEGC for details on exitcommit.

Monday, June 21, 2010

I learned I don't think I like this...

I was reading around and stumbled on an article from Dr Dobb's online (I have a long history with Dr Dobb's - had it not been for them, you wouldn't be reading this!). The article described a 'feature' of Windows 7 that I have mixed feelings about. The same sort of mixed feelings (well, not so mixed - I lean far to one side) that I have about cursor_sharing being set to anything other than EXACT.

Here is the article.

Ok, so why don't I like it? It seems to be a way to 'self-correct' a program. It "seems" like a "good thing".

I don't like it because it won't help get the problem fixed (same with cursor_sharing :( ). In fact, it will promote *more* code that suffers from heap overwrites. It lets bad developers develop bad code even faster and distribute it - thinking all the while that they are seriously good developers. That is, it leads to delusion, and to bad coders getting more senior without ever learning from their mistakes.

In short, it instills a false sense of "hey, I'm pretty good" in developers that probably shouldn't have that sense.

It could definitely lead to some really strange issues. Think about it: a program that used to crash stops crashing - for a while - and then starts crashing again (as the overwrite occasionally gets bigger than normal, requiring more "pad" bytes than are there). And who knows how allowing a memory overwrite to propagate into other bits of the code will affect things. I prefer code that works right - not code that sometimes seems to work.
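To make that concrete, this is the kind of bug such padding papers over - a minimal C sketch of my own, not code from the article:

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *p = malloc(16);   /* ask for 16 bytes...                       */
        if (!p) return 1;
        memset(p, 'x', 17);     /* ...write 17 - a one byte heap overwrite   */
        free(p);                /* may crash, may not - depends on the heap  */
        return 0;
    }

Add a few pad bytes after every allocation and the stray byte lands harmlessly in the padding - the program appears to work. Let the overwrite get a little bigger one day and it crashes all over again.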

I'm reminded of a code review I did some 20 years ago. I asked the developer, "Why do you have this massive static array defined? We don't seem to use it." The answer: "If you take it out, the program crashes - the compiler must have a bug." The look on my face - I wish I had a picture - it would have been priceless.

I'm not a fan of this "let's try to fix it for you and let you pretend you know what you're doing" approach to software. We rely on software way too much.

Oh well, just a 30-second thought - I just read the article and felt the need to gripe... Things like this scare me.

I have to say - I wrote something similar myself many years ago. It was a C library I called xalloc. It replaced the malloc, calloc, realloc, free, etc. functions of C. It worked by allocating the requested memory plus a few extra bytes. It would set some magic bytes at the FRONT and the END of the allocated memory, record the source code file and line number that allocated the memory, and return a pointer to the memory to be used by the program. Every time you called any of the xalloc functions, it would inspect all of the allocated memory and CRASH the program if any of the magic bytes in front of/at the end of any memory block had been changed. When the program exited, it would report on all allocated memory that had not been freed. You could turn the checking off with an environment variable if you wanted, but it was always ready to be "on". I made everyone that worked on my team use it - it saved us countless hours (and it found the bug in the code of the person that needed that big static array in a few seconds)...
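From memory, the core of the idea looked something like this. This is a reconstruction, not the original source - every name and detail is illustrative, and it skips things like thread safety and the calloc/realloc wrappers:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAGIC  0xA5
    #define NGUARD 4

    typedef struct hdr {
        struct hdr   *next, *prev;    /* list of live allocations          */
        const char   *file;           /* who allocated this block          */
        int           line;
        size_t        size;           /* bytes the caller asked for        */
        unsigned char front[NGUARD];  /* guard bytes before the user block */
    } hdr;

    static hdr *live = NULL;

    static void check(hdr *h)
    {
        unsigned char *end = (unsigned char *)(h + 1) + h->size;
        for (int i = 0; i < NGUARD; i++)
            if (h->front[i] != MAGIC || end[i] != MAGIC) {
                fprintf(stderr, "heap overwrite: block from %s:%d\n",
                        h->file, h->line);
                abort();              /* crash immediately, on purpose */
            }
    }

    void *xalloc(size_t n, const char *file, int line)
    {
        for (hdr *p = live; p; p = p->next)   /* inspect every live block */
            check(p);
        hdr *h = malloc(sizeof(hdr) + n + NGUARD);
        if (!h) return NULL;
        h->file = file; h->line = line; h->size = n;
        memset(h->front, MAGIC, NGUARD);                     /* front guard */
        memset((unsigned char *)(h + 1) + n, MAGIC, NGUARD); /* end guard   */
        h->prev = NULL; h->next = live;
        if (live) live->prev = h;
        live = h;
        return h + 1;                 /* caller sees only its n bytes */
    }

    void xfree(void *p)
    {
        hdr *h = (hdr *)p - 1;
        check(h);
        if (h->prev) h->prev->next = h->next; else live = h->next;
        if (h->next) h->next->prev = h->prev;
        free(h);
    }

    void xreport(void)                /* at exit: list whatever leaked */
    {
        for (hdr *h = live; h; h = h->next)
            fprintf(stderr, "leaked %zu bytes from %s:%d\n",
                    h->size, h->file, h->line);
    }

    /* a macro made the file/line bookkeeping free for callers */
    #define xmalloc(n) xalloc((n), __FILE__, __LINE__)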

My approach differed from Windows 7 in that I would prefer a program to crash and burn immediately rather than live for another second if it made such a grievous error. I'd still rather the program die than continue today...

Wednesday, June 09, 2010

What I learned about deadlines...

I learned that I am not the only one :) Seth's blog is one of the ones I read every time. His posts are short, to the point, and almost always meaningful to me. Deadlines are the greatest motivator for me - if I do not have a deadline for something, I can pretty much guarantee you I will not finish it. I set my own little deadlines for things just to get them finished. Whenever someone asks me to do something for them - write a foreword, make a recommendation, whatever - I typically say "sure, and what is the drop-dead date?" If they know me, they'll give me a date before the true drop-dead date just to have it in a timely fashion (because the odds they'll see it before then are slim to none).

Speaking of deadlines, I just finished the 2nd edition of Expert Oracle Database Architecture. Right now, this minute. I just have to dot the i's and cross the t's now - a few final copy edits and it'll be done. This will be the blurb on the back of the book (which you can expect to see soon):

Expert Oracle Database Architecture

Dear Reader,
I have a simple philosophy when it comes to the Oracle database: you can treat it as a black box and just stick data into it, or you can understand how it works and exploit it fully. If you choose the former, you will, at best, waste money and miss the potential of your IT environment. At worst, you will create nonscalable and incorrectly implemented applications—ones that damage your data integrity and, in short, give incorrect information. If you choose to understand exactly how the Oracle database platform should be used, then you will find that there are few information management problems that you cannot solve quickly and elegantly.

Expert Oracle Database Architecture is a book that explores and defines the Oracle database. In this book I’ve selected what I consider to be the most important Oracle architecture features, and I teach them in a proof-by-example manner, explaining not only what each feature is, but also how it works, how to implement software using it, and the common pitfalls associated with it. In this second edition, I’ve added new material reflecting the way Oracle Database 11g Release 2 works, updated the stories about implementation pitfalls, and covered the new capabilities in the current release of the database. The number of changes between the first and second editions of this book might surprise you. Many times as I was updating the material, I myself was surprised to discover changes in the way Oracle worked that I was not yet aware of. In addition to updating the material to reflect the manner in which Oracle Database 11g Release 2 works, I’ve added an entirely new chapter on data encryption. Oracle Database 10g Release 2 added a key new capability, transparent column encryption, and Oracle Database 11g Release 1 introduced transparent tablespace encryption. This new chapter takes a look at the implementation details of these two key features, as well as manual approaches to data encryption.

This book is a reflection of what I do every day. The material within covers topics and questions that I see people continually struggling with, and I cover these issues from a perspective of "When I use this, I do it this way." This book is the culmination of many years’ experience using the Oracle database, in myriad situations. Ultimately, its goal is to help DBAs and developers work together to build correct, high-performance, and scalable Oracle applications.

Thanks and enjoy!