Oracle Scratchpad

October 11, 2016

#ThanksOTN

Filed under: Non-technical,Oracle — Jonathan Lewis @ 6:57 pm GMT Oct 11,2016

To mark the OTN Appreciation Day I’d like to offer this thought:

“Our favourite feature is execution plans … execution plans and rowsource execution statistics … rowsource execution statistics and execution plans … our two favourite features are rowsource execution stats and execution plans … and ruthless use of SQL monitoring …. Our three favourite features are rowsource execution stats, execution plans, ruthless use of SQL monitoring and an almost fanatical devotion to the Cost Based Optimizer …. Our four … no … amongst our favourite features are such elements as rowsource execution statistics, execution plans …. I’ll come in again.”

With apologies to Monty Python.


August 16, 2016

Month End

Filed under: audit,CBO,Non-technical — Jonathan Lewis @ 1:04 pm GMT Aug 16,2016

A question about parallel query and cardinality estimates appeared on OTN a little while ago that prompted me to write this note about helping the optimizer do the best job with the least effort.  (A critical point in the correct answer to the original question is that parallel query may lead to “unexpected” dynamic sampling, which can make a huge difference to the choice of execution plans, but that’s another matter.)

The initial cardinality error in the plan came from the following predicate on a “Date dimension” table:


      AR_DATE.CALENDAR_DT   = AR_DATE.MONTH_END_DT 
AND   AR_DATE.CALENDAR_DT  >= ADD_MONTHS(TRUNC(SYSDATE,'fmyy'), -36) 
AND   AR_DATE.MONTH_END_DT >= ADD_MONTHS(TRUNC(SYSDATE,'fmyy'), -36)

In the parallel plan the estimated number of rows on a full tablescan of the table was 742, while on the serial plan the same tablescan produced a cardinality of 1. You will appreciate that having an estimate of 1 (or less) that is nearly three orders of magnitude wrong is likely to lead to a very bad execution plan.

My first thought when I saw this was (based on a purely intuitive interpretation): “there’s one day every month that’s the last day of the month and we’re looking at roughly the last 36 months, so we might predict a cardinality of about 36”. That’s still a long way off the 742 estimate and 1,044 actual for the parallel query, but it’s a warning flag that the serial estimate is probably an important error – it’s also an example of the very simple “sanity checking” mental exercises that can accompany almost any execution plan analysis.

My second thought (which happened to be wrong, and would only have been right in versions from some time well before 10.2.0.5) was that the optimizer would treat the add_months() expressions as unknown values and assign a selectivity of 5% to each of those predicates, reducing the combined selectivity to 1/400th of the selectivity it gave to the first predicate. In fact the optimizer evaluates the expressions and would have used the normal (required range / total range) calculation for those two predicates.
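
Purely as arithmetic, that old-style calculation is easy to sketch – two “unknown value” predicates at 5% each:

    0.05 * 0.05 = 0.0025 = 1/400

so the selectivity of the first predicate would have been scaled down by a factor of 400.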

It’s the first predicate that I want to examine, though – how does the optimizer calculate a selectivity for it ? Here’s some code to generate sample data to use for testing.


rem
rem     Script:         month_end.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2016
rem

create table t1
nologging
as
select
        rownum                                                   id,
        to_date('01-Jan-2010','dd-mon-yyyy') + rownum - 1       calendar_date,
        add_months(
                trunc(to_date('01-Jan-2010','dd-mon-yyyy') + rownum - 1 ,'MM' ),
                1
        ) - 1                                                   month_end_date
from
        dual
connect by
        level <= trunc(sysdate) - to_date('01-Jan-2010','dd-mon-yyyy') + 1
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

This clunky bit of code gives me consecutive dates from 1st Jan 2010 up to “today” with the month_end_date column holding the month end date corresponding to the row’s calendar_date. So now we can check what the optimizer makes of the predicate calendar_date = month_end_date:


set autotrace on explain

select count(*) from t1 where calendar_date = month_end_date;

  COUNT(*)
----------
        79

Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    16 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |    16 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |     1 |    16 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("CALENDAR_DATE"="MONTH_END_DATE")

Looking at operation 2 we can see that, in effect, the optimizer has considered two independent predicates “calendar_date = {unknown}” and “month_end_date = {unknown}” and taken the lower of the two selectivities – which means the cardinality estimate is 1 because the calendar_date column is unique across this table.
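
To put some approximate numbers to that (a sketch – my table holds one row per day from 1st Jan 2010, so roughly 2,400 rows at the time of the test, with about 79 distinct month-end dates):

    selectivity  = least( 1/num_distinct(calendar_date), 1/num_distinct(month_end_date) )
                 = least( 1/2400, 1/79 ) = 1/2400

    cardinality  = num_rows * selectivity = 2400 * 1/2400 = 1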

There are various ways to work around problems like this. One of the simplest would be to tell Oracle to sample this table with the (table-level) hint /*+ dynamic_sampling(t1 1) */; in fact, since this predicate is effectively treated as an examination of two predicates the (cursor-level) hint /*+ dynamic_sampling(4) */ would also cause sampling to take place – note that level 4 or higher is required to trigger sampling for “multiple” predicates on a single table. As a general guideline we always try to minimise the risk of side effects so if this problem were embedded in a much larger query I would prefer the table-level hint over the cursor-level hint.
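
For reference, this is how those two hints might be applied to my test query – treat it as a sketch to experiment with, since sampling levels can behave differently across versions:

select /*+ dynamic_sampling(t1 1) */ count(*) from t1 where calendar_date = month_end_date;

select /*+ dynamic_sampling(4) */ count(*) from t1 where calendar_date = month_end_date;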

There are other options, though, that would allow you to bypass sampling – provided you can modify the SQL. The script I used to create this table also included the following statement:


alter table t1 add (
        date_offset1 generated always as (calendar_date - month_end_date) virtual,
        date_flag generated always as (case when calendar_date - month_end_date = 0 then 'Y' end) virtual
);

In 12c I would declare these virtual columns to be invisible to avoid problems with any SQL that didn’t use explicit column lists (there’s a sketch of the declaration after the list below). For demonstration purposes I’ve set up two options – I can find the rows I want with one of two obvious predicates:

    date_offset1 = 0
    date_flag = 'Y'
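
As an aside, the 12c invisible declaration might look like the following sketch – I’ve used a hypothetical column name (date_flag2) to avoid clashing with the columns above, and you should check the exact syntax against your version:

alter table t1 add (
        date_flag2 invisible generated always as (
                case when calendar_date - month_end_date = 0 then 'Y' end
        ) virtual
);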

In fact there’s a third predicate I could use that doesn’t need to know about the virtual columns:

    calendar_date - month_end_date = 0

Unfortunately I can’t arbitrarily swap the order of the two dates in the last predicate, and the optimizer won’t spot that it is also equivalent to “calendar_date = month_end_date”. Here are a few execution plans – for which the only significant bit is the cardinality estimate of the full tablescans:


select count(*) from t1 where date_flag = 'Y';

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     2 |     4  (25)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     2 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |    79 |   158 |     4  (25)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DATE_FLAG"='Y')



select count(*) from t1 where date_offset1 = 0;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     4 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     4 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |    78 |   312 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DATE_OFFSET1"=0)



select count(*) from t1 where calendar_date - month_end_date = 0;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     4 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     4 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |    78 |   312 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T1"."DATE_OFFSET1"=0)

It’s interesting to note that the optimizer has transformed the last predicate into the equivalent virtual column expression to do the arithmetic. You might also note that the date_flag option is slightly more accurate – that’s because it’s based on an expression which is null for the rows we don’t want, while the date_offset1 column has a value for every row and a little bit of arithmetical rounding comes into play. Finally there’s a small cost difference – which I’d ascribe to the CPU cost that the optimizer has added for the CASE expression being applied on top of the simple date arithmetic.
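
If you want to see the figures behind those two estimates you could check the column statistics – after gathering stats so that the virtual columns are included – with something like this:

select
        column_name, num_distinct, num_nulls, density
from
        user_tab_cols
where
        table_name = 'T1'
and     column_name in ('DATE_FLAG','DATE_OFFSET1')
;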

Of course, whatever else you might play around with when working around a cardinality problem like this, I think the strategic aim for a data warehouse system would be to get a REAL flag column on the table and populate it at data loading time if month-end dates played an important part in the proceedings – though I have to say that the virtual flag column is one I do rather fancy.
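
For what it’s worth, a minimal sketch of that strategic approach (ignoring the question of how the data loading code would maintain the flag) might be:

alter table t1 add month_end_flag varchar2(1);

update t1
set     month_end_flag = 'Y'
where   calendar_date = month_end_date;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for columns month_end_flag size 1')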


July 6, 2016

Centenary

Filed under: Non-technical — Jonathan Lewis @ 5:02 pm GMT Jul 6,2016

I rarely blog about anything non-technical but after the events last Friday (1st July) I wanted to say something about the pride that I shared with several hundred parents around the country as they saw the effect their offspring created through a living memorial of the terrible waste of life that happened a hundred years ago on 1st July 1916, when some 70,000 soldiers (a very large fraction of them British) were killed or injured on the first day of the battle of the Somme.

While a memorial service was being held at Thiepval – a monument to 72,000 British (Empire) soldiers who died in the battle of the Somme but have no known grave – 1,500 “ghosts of the Somme” were silently wending their way in small groups through the streets, shopping centres, and train stations of cities across the UK, pausing to rest from time to time and occasionally bursting into the song of the trenches: “We’re here because we’re here”.

Each “ghost” represented a specific soldier killed on the first day of the Somme and if you approached one of them to ask what was going on their only response was to look you in the eye and hand you a card stating their name, rank, regiment and, where known, the age at which they died.

Although many of the posts and tweets about the event mention the collaboration and assistance of various theatre groups around the country almost all of the soldiers were simply people who had responded to an advertisement for Project Octagon and had spent time over the previous 5 weekends rehearsing for the event. My son Simon was one of the volunteers who was on the London beat, starting with the morning commuters at Kings Cross then trekking around London all day – in hobnailed leather boots – to end at Waterloo station for the evening commuters.

After hours of walking this was how he appeared at Waterloo at the end of the day:

[Photograph: Simon in uniform at Waterloo at the end of the day]

Like me he normally sports a beard and moustache but he’d shaved the beard off and trimmed the moustache to the style of an older era. The absent, dazed, look is in character for the part but also, I think, an indication of exhaustion, both physical and emotional. I wasn’t even sure he’d realised I was crouching six feet away to take this photo until I asked him about it the following day. When I showed the picture to my wife it brought tears to her eyes to think that 100 years ago that might have been the last sight of her son she’d see before he went off to die – it’s a sentiment that appeared more than once on Twitter through the day.

Shortly before 6:00 pm several more groups converged on Waterloo for a final tableau giving us a rendition of “We’re here because we’re here” that ended in an agonised scream:

[Photograph: the final tableau at Waterloo]

It’s a gut-wrenching thought that a group that size would have been killed roughly every 6 minutes, on average, on the first day of the Somme though, realistically, the entire 1,500 that volunteered for the day would probably have died in the first few minutes of the first wave.

Behind the Scenes

There was no announcement of this living memorial so throughout the day people were asking where the soldiers came from and who had organised the event. Finally, at 7:00 in the evening 14-18 NOW identified themselves as the commissioning body, with Jeremy Deller as the artist in collaboration with Rufus Norris of the National Theatre.

Like any military operation, though, between the generals at the top and the troops at the bottom there was a pyramid of personnel connecting the big picture to the final detail. Under Jeremy Deller and Rufus Norris there was a handful of key characters without whom the day would have been very different. I can’t tell you who they all were but I’m proud to say that one of them was my daughter Anna who, along with a colleague, spent a large fraction of the last 16 months in the role of “Lead Costume Supervisor” preparing for the day. Under the pair there were several regional costume supervisors, and each costume supervisor was responsible for several dressers who would have to help the volunteers put on the unfamiliar battledress.

Despite working on the project for 16 months Anna told me very little about what was going on until the day was over, and this is a thumbnail sketch (errors and omissions are my fault) of what she’s told me so far.

Amongst other things she selected a list of names from the soldiers who had died on the first day of battle, recording their rank, regiment, battalion and, where known, age. She then had to find out exactly what kit each battalion would have been wearing on the day, allowing for some of the variation that would have appeared within each battalion and catering for the various ranks; then she had to find a supplier who could make the necessary uniforms in a range of sizes that would allow for the variation in the build of the (as yet unknown, unmeasured) volunteers.

As batches of uniforms arrived each one had to be associated with its (historic) owner and supplied with 200 cards with the owner’s details – and it was really important to ensure that the right name was attached to a uniform before the uniforms could be dispatched around the country. Ideally a uniform would arrive at a location and find a volunteer who was the right size to wear it, with the right apparent age to match the card that went with the uniform; inevitably some uniforms had to be moved around the country to match the available volunteers.

The work didn’t stop with the uniforms being in the right place at the right time, of course. There aren’t many people alive who know how to dress in a British Army uniform from 1916 – so Anna and her colleague had to create a number of videos showing the correct way to wear webbing, how to put on puttees, etc. The other problem with the uniforms was that they were brand new – so they had to be “broken down”. That’s a lot of work when you’ve got 1,500 costumes. In part this was carried out by the volunteers themselves who spent some of their rehearsal time wearing the costumes while doing energetic exercises to wear them in and get them a little grubby and sweaty; but it also meant a lot of work for the dressers who were supplied with videos showing them how to rub (the right sort of) dirt into clothes and how to rough them up and wear them down in the right places with wire brushes etc.

One of the bits of the uniform you probably won’t have seen – or even if you saw it you might not have noticed it – was the T-shirt: the army uniform of the day would have been rather sweaty, itchy and uncomfortable on a hot summer’s day, so the soldiers weren’t wearing the real thing. Anna and her colleague designed a T-shirt that looked like the front of the shirt the troops should have worn under their battledress made of a material that was thinner, softer and much more comfortable than the real thing. In the end the day wasn’t as hot as expected so very few volunteers seemed to unbutton their tops – but if they had done so the T-shirts would have appeared to be the real thing.

Walking the Walk

Apart from the authenticity of the uniforms another major feature of the day was the way that the ghosts made their way from place to place silently, in single file, with no apparent reference to maps (or satnav). Every group had a carefully planned route and timetable, and two stage managers wearing brightly coloured backpacks so that they could be seen easily by the soldiers but – since one walked 50 metres ahead and the other 50 metres behind – were unlikely to be noticed by anyone who wasn’t looking. The stage managers followed the carefully planned and timetabled routes, allowing the soldiers to stay in character all the time.

You may have seen pictures of the troops on the various underground trains – that’s just one demonstration of the level of detailed planning that went into the day. With a tight timetable of action, and prior communication with station masters and other public officials to ensure that there would be no hold-ups at critical points, the lead stage manager could (for example) get to a station guard to warn them of the imminent arrival of a squad, show them the necessary travel cards, and get the gate held open for them. No need for WW1 ghosts to break character by fumbling for electronic travel cards, just a silent parade through an open gate.

Just as Anna was the Lead Costume Supervisor, there was a Lead Stage Manager with a cascade of local route masters beneath her. She was based in Birmingham and was responsible for working out how to make the timetabling and routing possible, using her home town as the test-bed for the approach, then briefing the regional organisers who applied the methods to prepare routes and handle logistics for their own locations.

End Results

To the people of London and Manchester and Belfast and Swansea and Penzance and Shetland and half a dozen places around the UK, it just happened: hundreds of ghosts of the past appeared and wandered among us. The uniforms were “real”, the journeys from place to place were “spontaneous” and “unguided”, and the ghosts were haunting. To most of us “it just happened”, but behind the scenes the effort involved in preparation and the attention to detail were enormous.

Between the “headline” names at the top of the pyramid and the highly visible troops on the ground at the bottom, it took the coordinated efforts of about 500 people to turn a wonderful idea into a reality that moved millions of people in their daily lives.


If you want to see more images and comments about the day you can follow the hashtag #wearehere, and there is a collection of Instagram images at http://becausewearehere.co.uk/. If you’re in the London area and want to hear more about the instigation and implementation of the day, there’s an evening event at the National Theatre on Monday 11th July featuring Jenny Waldman, Jeremy Deller and Rufus Norris discussing the event with a BBC correspondent.


May 10, 2016

Speaker Scores

Filed under: Non-technical — Jonathan Lewis @ 1:10 pm GMT May 10,2016

I published a note this morning that I drafted in January 2015, and I didn’t notice that it had gone back in time to publish itself on the date that I first drafted it – and it’s already been tweeted twice so I can’t move it. So this is a temporary link to pop it to the head of the queue while leaving it where it first appeared.

January 18, 2015

Speaker Scores

Filed under: Non-technical — Jonathan Lewis @ 11:47 am GMT Jan 18,2015

This is a note I drafted early in 2015 but, apparently, failed to publish. I rediscovered it this morning while searching for something else that I needed for an abstract I was submitting, so I thought I’d post it to see what people thought.  (For reasons I cannot explain the article has retroactively published itself on the date that I first drafted it even though I published it on 10th May 2016.)

There was a brief conversation on the Oak Table List when the UKOUG Tech 14 scores came out about the fact that the UKOUG used a range of 1 – 6 for rating speakers when the rest of the world used 1 – 5. Personally I think 1 – 5 is a bad idea and, as a speaker and an organiser, I think 1 – 6 is better and what I’d really like is 1 – 10. (In fact it’s possible that the UKOUG uses 1 – 6 as a consequence of a remark I made a few years ago when I was on the board of the UKOUG.)

Here’s how I’d argue my case. If you have 1 – 5 then, when supplying a rating, your choices are (nominally):

  1. Awful
  2. Bad
  3. I stayed to the end but don’t have any particular opinion
  4. Good
  5. Fantastic

The range of possibilities is too low: where’s the rating for “a couple of nice points, but not particularly good”, or “I wish I’d gone to something else, but it’s not so bad I want to walk out”? With a range of 1 – 6 you’re denied the possibility of an uninformative default of 3, and you have three ranks of “good” and three of “bad”.

Note, also, that I said “nominally” when listing the choices – unfortunately it’s a well-known effect that the extreme ends of a range of ratings tend to be automatically excluded, and the value viewed as meaning “average” is some way above the mid-point of the range (another hand-waving reason there for not having a mid-point value). If you set a range from 1 – 10, the actual values will tend to fall in the range 4 – 9, which gives you 6 values where you can get a proper feeling for how well the presentation was received, and if you get a 10 you know it went down well (which isn’t necessarily the same as it being a good presentation, of course) and if you get less than 4 you know you’ve got something to worry about.

Footnote:

In my dim and distant past I was a school teacher (teaching maths and computer science to the 11 – 18 age group) and the rating system for reports where I taught was two-part: A-E for effort, 1-5 for results. Nominally C-3 meant that your effort and results were typical of your year group. Realistically any pupil who got a C-3 was hugely offended and the average rating given was more like B-2.

(Inevitably there was at least one pupil I taught who spent his time trying to get his report card filled with E-1 for every subject.)


Your comments are welcome – and if there are enough then they may be of some assistance to the people who organise and speak at the conferences you attend. In fact, with that thought in mind I’ll even put up a poll on the three ranges I’ve mentioned in this article.

December 22, 2014

Learning

Filed under: Non-technical — Jonathan Lewis @ 5:28 pm GMT Dec 22,2014

I received an email a few weeks ago asking me if I would look at a series of three posts on adaptive dynamic sampling in 12c – (part 1, part 2, part 3). I took a note of the topic and URLs, and read through them fairly rapidly, and they seemed to be perfectly reasonable articles describing the author’s thoughts, tests, and observations. Inevitably, though, several questions ran through my mind as I read – typically along the lines of “what would happen if …”, “did you restart the instance before …”, “did you flush the shared pool between …”. It’s very hard to create, run, and report a set of tests that allow you to make solid inferences about how Oracle and the optimizer behave – and whatever you do someone else will find a way of asking some questions that push the envelope a little further.

I got a follow-up email a week or so later, asking if I’d had time to look at the article because the author wanted to present the topic at a local (Azerbaijan) user group event – and the follow-up email prompted me to write this blog note.

Given that it’s taken about three weeks for me to get around to writing this note you might appreciate that I don’t have a lot of time to spare on a topic that I’m not actively pursuing for a good reason. I accumulate a lot of information from around the internet, from books, and from presentations, and I’ll invariably attach lots of questions and conjectures to that information but I won’t necessarily be able to have any confidence in its utility or correctness. When I have to know an answer I may go back to a source I’ve noted and use it as a basis for doing some further goal-oriented investigations – but until I need to (or unless I’m particularly curious) I don’t have the time to do arbitrary research. This means, of course, that I don’t have time to get into a dialogue with people about the work they have done and the information they have presented.

On the other hand, of course, I thoroughly approve of anyone who takes the time to do the experiment and write up the results, and I heartily approve of anyone who is prepared to stand up in front of a user group and share their observations – I want to encourage people to do this. So this is the reply I sent which, I hope, is suitably positive:

There are a number of questions that I would probably want answered if I were to examine the posts in detail, and I would want to repeat the tests that you have done to check that I got the same results and to see if there were other observations I could make that might lead me to different conclusions. This is something that takes far too much time to do properly, and I have no inside information that would allow me to say very quickly whether your comments are right, wrong, or simply incomplete.

The important point, though, is that you have set up some tests, documented the results, and offered some conclusions. That is sufficient for your presentation to the user group.
  • This is what I did
  • This is what I observed
  • This is what I concluded
At worst someone will ask some questions like: “Did you check …”, or “What happens if …”. You may need to answer “I don’t know”, or “I didn’t think of doing that”, or “That’s a good idea, I’ll try it”; but whatever happens everyone learns something, and everyone has the opportunity to learn more based on what you have given them.
In passing, at the very first presentation I did for the UK Oracle User Group about 25 years ago I got about half way through some comments on an odd performance pattern I had observed when modifying the SQL*Plus arraysize, when someone stuck up their hand and asked: “Did you check the tcp packet size?” The only response I could make, after a few seconds pause to think, was: “I should have thought of that. No.”

August 6, 2013

Interview

Filed under: Non-technical — Jonathan Lewis @ 6:59 am GMT Aug 6,2013

The interview I did with Timur Akhmadeev while visiting Moscow is now online. 90 minutes ! I’ve just got through the first 6 minutes and haven’t embarrassed myself yet.

If you haven’t got the time to listen to the recording, there’s now a transcript online.

July 4, 2013

Tweet

Filed under: Non-technical — Jonathan Lewis @ 6:04 am GMT Jul 4,2013

I don’t know why I ever agree to go anywhere near Doug Burns – he usually manages to persuade me into doing things I don’t want to do. This time (at a meeting of the London Oracle Beer group) he’s persuaded me that I really should join twitter. So I have (jloracle) – and found that I was being followed by four people before I’d even created an account, and was advised that I’d really, really, like to follow:

  • Jack Rivera
  • Justin Bieber
  • Jennifer Lopez
  • Katy Perry
  • Gwen Shapira

I had no idea who Jack Rivera might be, though I did recognise the next three names from those annoying ads that seem to appear on all sorts of news feeds. The one that baffled me was Gwen Shapira – by what mechanism did twitter manage to connect my name/tag/email address with someone relevant ?

Anyway, thanks, Doug – now I have to start thinking of something intelligent, perceptive or witty in 140 characters or less.


(I just got the Jennifer Lopez link – JLOracle = J-Lo !)


May 13, 2013

ISS

Filed under: Non-technical — Jonathan Lewis @ 6:45 pm GMT May 13,2013

I’d like to dedicate this posting to fellow Oak Table member Richard Foote, for reasons that the readers we have in common will immediately recognise: http://www.youtube.com/watch?v=KaOC9danxNo

The singer is Canadian astronaut Commander Chris Hadfield who has been tweeting and posting pictures from space – be careful, you may get hooked: https://twitter.com/Cmdr_Hadfield/status/332819772989378560/photo/1

Update:

When I posted the link to the video it had received 1.5M views; less than 24 hours later it’s up to roughly 7M. (And they weren’t all Richard Foote). Clearly the images have caught the imagination of a lot of people. If you have looked at the twitter stream it’s equally inspiring – and not just for the pictures.


October 29, 2012

Help Yourself

Filed under: Non-technical — Jonathan Lewis @ 5:17 pm GMT Oct 29,2012

When people ask for help on (for example) OTN, they are often asked to supply further information – sometimes in the form of requests for results from SQL queries. If you are ever in this position, you may find that you don’t understand what the query does, or why the information is useful – nevertheless you can still do something to make it as easy as possible for your potential saviour to help you.

Here’s an example to show you how NOT to do it:

(more…)

August 29, 2012

Packt

Filed under: Non-technical — Jonathan Lewis @ 7:31 am GMT Aug 29,2012

Here’s the content of an email I sent to Packt back in February this year:

Please ensure that I don’t hear from Packt again.

I have been approached twice in the past and explained that I don’t have the time, and I’m not interested in reviewing books where I have had no involvement with the authors.

This elicited an apology, of course, then on 4th August (after two more pieces of spam from them) I sent them another email quoting the above with the following introduction:
(more…)

November 18, 2011

Planet Earth

Filed under: Non-technical — Jonathan Lewis @ 1:52 pm GMT Nov 18,2011

I know it’s another post that’s not about Oracle, but someone sent me this video link a couple of days ago and it’s too wonderful not to share. (I’ve just got back from Iceland, so the Aurora Borealis at 1:05 is particularly relevant)

August 26, 2011

Big Numbers

Filed under: Non-technical — Jonathan Lewis @ 5:57 pm GMT Aug 26,2011

It’s hard, sometimes, to get an instinctive idea of how big a “big number” really is. I’ve just heard a brilliant description of a billion (American style: 10^9) that really gives you a gut feeling for how big the number is:

If you owed someone a billion dollars, and paid it back at the rate of $1 per second – how long would it take to pay off the debt (and don’t even think about the interest accruing) ?

The answer is about 32 years.
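
The arithmetic, if you want to check it:

    1,000,000,000 / (60 * 60 * 24 * 365) = 1,000,000,000 / 31,536,000 ≈ 31.7 years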

The way I read it, that turns a big number into a number that’s small enough to comprehend but big enough to feel enormous.

April 29, 2011

Upgrades

Filed under: Non-technical,Oracle,Upgrades — Jonathan Lewis @ 6:06 pm GMT Apr 29,2011

Here’s a link to a truly ambitious document on Metalink (if you’re allowed to log on):

Doc ID 421191.1: Complete checklist for manual upgrades of Oracle databases from any version to any version on any platform

(Actually it only starts at v6 – but I don’t think there are many systems still running v5 and earlier).

April 24, 2011

Helpdesk

Filed under: Non-technical — Jonathan Lewis @ 10:56 am GMT Apr 24,2011

How to ensure that you never give the wrong answer – as demonstrated by my bank’s telephone help system:

Stage 1 – after calling in through a special high-cost number, of course:

“{long message extolling the virtues of using the bank’s internet system for all your requirements}”

Stage 2 – you wait for the list of options to be recited

“Press 1 for …”
“Press 2 for …”

“Press 9 to speak to a member of the helpdesk”

Stage 3 – you decide the only relevant option is to press 9 to speak to a person:

“I’m sorry, I don’t understand what you want.” click, brrrrrrrrrrrrr …

Maxim: If you don’t want to fail, don’t even try.

