Archive for the 'Bugs and Errors' Category



When that zero isn’t really a zero

Thursday 8 August 2019 @ 10:24 pm

Last week a customer was really befuddled. He had a formula that said:

if {@field} = 0 then ...

He could see lots of zero values but the formula didn’t work as expected.  He couldn’t figure it out so he sent it to me. The first thing I did was add a few more decimals to see if it was a rounding issue. That didn’t show anything, but the formula still insisted that the value was somehow NOT equal to zero.

So I went into the formula and multiplied the current value by one trillion. Then instead of zeros I started to see some small numbers.  I am not an expert on floating point values or database precision but I have seen this before in reports.  The solution is to round the value in the formula before comparing it to zero. In this case we rounded the value to two decimals like this:

if Round ({field}, 2) = 0 then ....

That made the formula behave as expected.  The odd part is that I have seen the same problem with two other customers in the past week. It could be just a coincidence, but I figured I would mention this and see if this is happening to more people.
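The same effect is easy to reproduce outside Crystal. Here is a short Python sketch (the values are illustrative, not the customer's data) showing a "zero" that fails an equality test until it is rounded:

```python
# A value that displays as 0 but carries floating-point residue.
total = 0.1 + 0.2 - 0.3

print(total == 0)            # False: the residue makes the equality test fail
print(total * 10**12)        # scaling it up reveals a tiny non-zero number
print(round(total, 2) == 0)  # True: round before comparing, as in the Crystal fix
```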




Preventing the “division by zero” error

Friday 26 July 2019 @ 9:52 am

Crystal formulas can use 3 different divide operators:

  • Regular divide [ / ]
  • Integer divide [ \ ]
  • Percentage [ % ]

But all of these will fail if the value after the operator is zero. The report will stop and Crystal will throw the error message “Division by Zero”. The standard solution is to make sure the number you are dividing by is not zero before you do the calculation, something like this:

if {fieldA} = 0
then 0
else {fieldB} / {fieldA}

This way, whenever the bottom of the fraction (denominator) is a zero, the formula will print a zero and NOT try to do the calculation.
But even when customers use this formula pattern I still see the divide by zero error. Some users mistakenly check the top of the fraction (numerator) instead of the bottom (denominator).  Some do it correctly at first, but then change the denominator and forget to change the first line to match.  So I have developed the habit of using Local variables to make things easier. My normal pattern now looks like this:

Local Numbervar n := {FieldA}; //numerator
Local Numbervar d := {FieldB}; //denominator
if d = 0 then 0 else n/d

This ensures that the value being checked is always the value on the bottom. Some other advantages of this method are:
1) If d is a subtotal or a long expression you only have to enter it in one place.
2) If you have to create a series of similar expressions, like for 12 different months, you can duplicate the first example and you only have to change the values at the top of the formula.
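The same pattern translates directly to other languages. Here is a minimal Python sketch of the guard (the function name is mine, not anything from Crystal):

```python
def safe_divide(n, d):
    # Mirror of the Crystal pattern: if d = 0 then 0 else n/d
    return 0 if d == 0 else n / d

print(safe_divide(10, 4))  # 2.5
print(safe_divide(10, 0))  # 0 instead of a division-by-zero error
```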




Impossible link in Pervasive SQL (Elliot)

Tuesday 14 May 2019 @ 3:14 pm

I have written before about databases that take selection criteria from Crystal and then use the wrong index so that valid records are missing from the results. The solution is to write the criteria so those rules don’t make it into the SQL WHERE clause. Crystal can then apply that criteria locally so it is done correctly.

But today I ran into a similar problem that didn’t have a simple solution. I was creating a report to read Elliot data. Elliot is what used to be called Macola Accounting. We were connecting to a Pervasive SQL DB using an ODBC connection. We were trying to link the Item file to a second instance of the Item file to get a list of components for manufactured items. What we found is that when we joined the component ID from instance one to the part ID in instance two, the results would not return a single match between the two tables.

Looking at the tables separately showed the matching data was there. And when I tried to filter to a single PartID the results would not find that ID. This is when I realized that we had an index problem like the one I described above.

So I looked at the index tabs in the Database Expert and noticed that this table had two red index tabs, meaning there were two fields in the primary index. The tabs were on Item Number and Sequence Number. We were linking from a table that had a component item number but no component sequence number. It appears that Pervasive SQL defaults to using the primary index for ODBC joins, even if the fields you are using for the join don’t completely match the fields in the index, so the link will fail every time. I even unchecked the option “Use Indexes or Server for Speed” to see if that would help, but it didn’t have any effect.

We were lucky that the table we were using had an equivalent view. We linked this view to itself and we were finally able to find matching records. I assume this worked because this view, like most views, is not indexed.




Group sort reverses ascending and descending

Wednesday 8 May 2019 @ 5:13 pm

I solved another occasional mystery today.

Crystal allows you to put your groups in order based on the summary fields that exist for that group. This feature is in the “Report” menu under the label “Group Sort”. So if you group by customer and subtotal sales for each customer you can put the customers in order based on their sum. You can rank the customers this way in either ascending or descending order.

You are allowed to use any type of summary field for group sorting, and on occasion I have used a summary that is based on a date field. This could be the first (minimum) order date for each customer or the last (maximum) order date for each customer. I have noticed that sometimes when I rank a group based on one of these date summaries, Ascending seems to behave like Descending or vice versa. For instance I might pick Descending and expect the groups with the latest dates to be first. Sometimes they are and sometimes they aren’t. Before today I had never taken the time to figure out what was going on. Now I know.

If your summary is the Maximum of a date field, like the last order date for each customer, then setting the group sort to ascending or descending will behave in the expected way. Ascending will put the groups with the earliest summary dates first and descending will put the groups with the later dates first.

But if your summary is the Minimum of a date field, like the first order date for each customer, then group sorting for that field will work in reverse. Ascending will put the groups with later dates first and descending will put the groups with earliest dates first. To get the groups to go the way you want you just have to pick the opposite direction.

A couple of notes:
1) This doesn’t happen with a minimum summary of numbers or strings, just a minimum of dates.
2) You can get the same summary value as the minimum by doing the Nth Smallest (with N = 1). If you use the Nth Smallest summary for group sorting, it applies ascending/descending the normal way, not reversed like the Minimum function.




Hiding part of your criteria from the database

Tuesday 16 April 2019 @ 5:21 pm

I first wrote about this issue a decade ago when it showed up in databases like Paradox, Btrieve and Visual dBase. But I recently saw it twice in PostgreSQL environments so I am going to write about it again.

The problem occurs when you add a new rule to the selection formula. The main symptom is that the criteria works fine when you click “use saved data” but then when you refresh you lose some valid records. Some people think they have a bad join which can also cause you to lose records, but a bad join will lose records even if you click “use saved data”.

In my experience, the issue is usually connected with an index in the database.  When Crystal Reports sends your new rule to the database, the database tries to use an index to speed things up. But some indexes don’t work correctly for filtering and will cause the report to miss some or all valid records. The solution is to prevent that rule from being sent to the database by forcing the rule to be applied locally. There are three approaches I have used:

1) Starting with CR 2008 (v12) you can put this rule into the “Saved Data” selection formula. This selection formula is always applied locally so the database usually doesn’t see it. Crystal will apply this selection formula to the records that are sent back by the database.

2) The first option may not work in all cases, and of course it won’t work if you use an older version of Crystal that doesn’t have that feature. In that case you have to use the method I wrote about in 2007. You write the selection formula in a way that forces Crystal to apply that rule locally. If the report is SQL based you want to prevent Crystal’s SQL generator from converting that rule into the SQL WHERE clause. The most reliable way I have found is by putting the database field inside a Crystal function. For instance, if your problem rule uses a numeric field in the selection formula like this:

{table.Amount} > 50

I would write it as

Round({table.Amount},2) > 50

Or if your selection formula is:

{Transaction.Code} = "ABC"

I would write it as

Left({Transaction.Code},3) = "ABC"

Applying a function to a database field in the selection formula will usually prevent that rule from being converted into the SQL.  In some cases I write the rule as a separate formula field called {@Criteria} and then reference {@Criteria} in the selection formula by adding a line to the selection formula like:

…. and {@Criteria}

Note that this is the exact opposite of what you normally want to do. Normally we want ALL of the record selection formula to be sent to the database or converted into the SQL WHERE clause, because that makes the query more efficient. When I have a slow report, one of the first things I check for is functions in the selection formula. But it doesn’t help to be efficient if the database can’t handle the rule correctly.
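The server-versus-local split can be pictured with a small sketch (conceptual only; this is not how Crystal is implemented). A rule that reaches the WHERE clause filters rows before they leave the database; a rule wrapped in a function gets applied to whatever rows come back:

```python
# Rows as they might come back from the database without the rule.
rows = [{"code": "ABC1"}, {"code": "XYZ2"}, {"code": "ABC9"}]

def apply_locally(rows, rule):
    # Crystal evaluates an untranslatable rule record by record, locally.
    return [r for r in rows if rule(r)]

# Local equivalent of: Left({Transaction.Code},3) = "ABC"
kept = apply_locally(rows, lambda r: r["code"][:3] == "ABC")
print(len(kept))  # 2
```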

3) I recently had one case where neither of the above options solved the problem. No matter how I wrote the criteria formula, as soon as it was part of the selection formula I would lose records.  So we gave up on selection and used suppression instead.  This is a last resort.  You don’t use the {@Criteria} formula in the selection formula at all.  Without the rule the report will include some records that you don’t want. You then suppress the unwanted records with a suppression formula like this:

not {@Criteria}

When you use this method you also have to make sure these records aren’t included in any totals. This means writing a formula that only includes the records that you want and then doing all totals on these formulas. For instance I could write a formula like this:

if  {@Criteria}
then {table.Amount}
else 0

If I total this formula it won’t matter that I have suppressed records in the report. I don’t see them and they aren’t in the totals.
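In Python terms the conditional total works like this (a sketch, not Crystal internals): rows that fail the criteria contribute zero, so the suppressed rows never distort the sum:

```python
rows = [
    {"amount": 100, "criteria": True},
    {"amount": 40,  "criteria": False},  # this row is suppressed on the report
    {"amount": 60,  "criteria": True},
]

# Equivalent of: if {@Criteria} then {table.Amount} else 0, then a Sum
total = sum(r["amount"] if r["criteria"] else 0 for r in rows)
print(total)  # 160
```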




Solving problems when reporting on CSV files

Monday 8 April 2019 @ 6:39 pm

You can create Crystal Reports that directly read CSV files using the Access/Excel (DAO) connection. Just keep in mind that CSV files don’t always make an ideal data source. Like spreadsheets, CSV files don’t have set data types for each column, and this ambiguity can cause you to lose some data. In some cases the report will also read the first record of the CSV file as the column headings, removing the first record of data from the dataset.

But CSV files have one advantage over XLS files. CSV files allow you to introduce a Schema.ini file to define the data type for each column in the CSV, which is something you can’t do with spreadsheets. The schema.ini file is a simple text file that sits in the same folder as your CSV. There are many attributes available in schema.ini, but you only need to include the ones you want to set; the rest will fall back to defaults stored in the registry. Here are the most common problems I find that can be solved with a schema.ini file:

  1. The CSV is reading the first row of data as column headings
  2. Columns read as the wrong data type
  3. Character column is read as numeric and shows only the numeric values
  4. The columns are parsed using the wrong character

Here is an example of a schema.ini that defines two different CSV files in the same folder:

[sample1.csv]
Format=CSVDelimited
ColNameHeader=False

[sample2.csv]
Format=CSVDelimited
ColNameHeader=False
Col1=OrderDate date
Col2=Amount long
Col3=CustID text
Col4=CustName text
Col5=CustCategory text

As you can see, a single INI file can define multiple CSV files when they are in the same folder. Each file gets its own section of the INI file. Both files are set to be read as comma delimited, and both are set to NOT treat the first row as column headings. In the first file we allow the driver to name the fields (usually A, B, C, etc.) and determine the data type automatically. In the second file we name each column and assign each a data type.
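If you generate CSVs programmatically, you can write the matching schema.ini at the same time. Here is a small Python helper (my own sketch, not part of Crystal or the DAO driver) that appends an entry like the ones above:

```python
from pathlib import Path

def append_schema_entry(folder, csv_name, columns=None, col_name_header=False):
    """Append a schema.ini section for one CSV; columns is a list of
    (name, type) pairs using the driver's type names (text, long, date, ...)."""
    lines = [f"[{csv_name}]",
             "Format=CSVDelimited",
             f"ColNameHeader={col_name_header}"]
    for i, (name, typ) in enumerate(columns or [], start=1):
        lines.append(f"Col{i}={name} {typ}")
    ini = Path(folder) / "schema.ini"
    with ini.open("a", encoding="ascii") as f:
        f.write("\n".join(lines) + "\n\n")
```

Calling it once per CSV in the folder builds up the multi-section file shown above.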

So if you are reporting on a CSV file and running into issues, using a Schema.ini file may help solve the problem.




Windows update breaks Raiser’s Edge reports

Saturday 9 February 2019 @ 6:20 pm

I have several customers that use the donor tracking software Raiser’s Edge(RE) and pull data out of it with Crystal Reports. To run CR against RE data usually involves exporting the data to an MS Access (MDB) file and then reading that MDB with Crystal. The challenge is that Microsoft doesn’t really support the classic MDB format anymore.

This week I heard from several RE users that a recent Windows update has broken the process. They go to run the report and get an error saying the file is in an “unrecognizable database format”.

One customer was able to resolve the problem by changing the export from “Blackbaud Report Writer Database (MDB)” to “MS Access 2000 Database” but there is some concern about making this change. Some users have said that these exports work fine when you run a report from Crystal, but that these reports will not always run fine from within the RE application menu. Others have had success running reports from these exports in both environments. I haven’t found the specific difference but I suspect that it may have to do with the version of the ODBC driver being used.

I will post more information as it comes in.




Minor changes can have a major impact on performance

Monday 7 January 2019 @ 11:54 pm

Over the holiday break I had a customer contact me about a report that had just started taking a very long time to run. The first place I looked was in the record selection formula where I found this in the second line:

{Orders.OrderDate} -1 in LastFullWeek

I suspected that this was the problem. To confirm I had him send me the original report that ran in the normal time. Here is what the original said:

{Orders.OrderDate} in LastFullWeek

Apparently the requirement for the report had changed from the prior week starting on Sunday to the prior week starting on Monday. That minor change causes Crystal to completely drop the date rule from the automatically generated SQL. This means the database will send back ALL dates and Crystal will have to apply the date filter locally. I had him try this instead:

{Orders.OrderDate} in Minimum(LastFullWeek) +1 to Maximum(LastFullWeek) +1

Both version 1 and version 3 return the same results but version 1 adjusts the field while version 3 adjusts the comparison values. Version 3 will make it into the SQL WHERE clause while version 1 will not.
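Here is a sketch of why the rewrite matters (conceptual, in Python): subtracting from the field means transforming every value before comparing, while shifting the bounds leaves the field bare so the comparison can be pushed to the server. The date range below is a stand-in for LastFullWeek:

```python
from datetime import date, timedelta

orders = [date(2019, 1, d) for d in range(1, 15)]
start, end = date(2019, 1, 6), date(2019, 1, 12)  # stand-in for LastFullWeek
one = timedelta(days=1)

# Version 1 style: transform the field, then compare (blocks SQL pushdown)
v1 = [d for d in orders if start <= d - one <= end]

# Version 3 style: shift the bounds once, compare the bare field
v3 = [d for d in orders if start + one <= d <= end + one]

print(v1 == v3)  # True: same rows either way
```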

The same problem happens when you use a function on the field. Here are two common examples I see:

Date ({Orders.CreateTimeStamp} ) in ...

Round ( {Orders.Amount} ) = ...

If the report performance is fine then these examples can stay, but if you need to speed up the report then they should be written without the functions, so that they are incorporated into the automatically generated SQL.




SQL Server dates show up as strings

Wednesday 14 November 2018 @ 4:57 pm

Before SQL Server 2008, all date values were stored as DateTime values, even if you didn’t need the time portion. Starting with SQL Server 2008 you could define a column as either a Date (with no time) or a DateTime. But I have noticed recently that anytime I create a field with a “Date” type, Crystal sees the field as a string instead of a date. So even though I usually don’t need time values, I typically create my table fields and calculations as DateTimes. That way Crystal can format the fields with date options and do date calculations in the report.

But one of my customers recently asked me about this. She found that this only happens if you use the SQL Server ODBC driver. Apparently, the SQL Server Native Client doesn’t convert date fields to strings. So I did a test by creating two DSNs to a test database and a test table. One DSN uses the SQL Server ODBC driver (10.00.17134.01) from 2018. The other uses the SQL Server Native Client 11 (2011.1102100.60) from 2011. Sure enough, a report using the Native Client maintained the date value as a date, while the ODBC driver converted it to a string.

Then I read this page where Microsoft now recommends using OLEDB:

When I first tried OLEDB I saw two providers. They gave me the same two results as above, which told me that these providers were using the same two drivers I had just tested with ODBC. That is when I realized that the article was talking about a newer OLEDB driver. I downloaded and installed this driver but if you aren’t careful it is easy to miss it in the list of providers. It looks very much like the old one. The only difference between the new one and the old one is that the new one uses the word “driver” while the old one uses the word “provider”.

Microsoft OLE DB Driver for SQL Server (MSOLEDBSQL)
Microsoft OLE DB Provider for SQL Server (SQLOLEDB)

The name in parens is what you see under connection “properties” in Crystal’s “Set Datasource Location” window. When I used the new MSOLEDBSQL driver I got date values.

And thanks to Laurie Weaver, a developer at Wyse Solutions, for letting me know this behavior was driver related.




Improving report performance with a subreport?

Friday 26 October 2018 @ 9:54 am

In most cases, subreports are a last resort. Typically they slow things down by adding an extra query to the process. But this week I found that moving some tables to a subreport actually sped things up.

The data came from the fundraising software Raiser’s Edge, which uses data exported to an MDB. The customer had designed a new report and found that it ran for over an hour without completing the query. Nothing looked wrong in the structure so I did some troubleshooting. I started with one table and then added the other tables a few at a time to see which table was the problem. All was fine until I reached the last 17 tables, which were all linked back to a single table. We only needed one record from each of the 17 tables and they all had about 500 records.

I was able to add the first three tables without issue, but beyond that the report would slow down more with each table added. It only took a few more tables to realize that we couldn’t add all 17 tables to the report and expect it to complete. I double checked the links, confirmed the indexes were in place and still couldn’t find any cause for the slowdown.

Finally, I removed those tables from the report and created a subreport that included just those tables, along with the table that linked them all together. The subreport ran instantly, both on its own and when inserted in the main report. My guess is that the MS Access engine was struggling with the number of joins, so splitting them into two separate queries made it more manageable.



