Archive for April, 2019
Crystal has a function that calculates the drill-down level of the current window. This allows you to have objects and sections format differently at specific levels of drill-down. The function is called DrillDownGroupLevel and it gives you a numeric value that tells you the current drill-down level for any preview tab. The normal preview is level 0, the first drill-down is level 1, etc. So if you want a section with column headings to appear at the second level of drill-down and ONLY at the second level of drill-down you can use the following as a suppression formula for that section:
DrillDownGroupLevel <> 2
That section will be suppressed in normal preview (drill-down level 0) and also at the first drill-down level, but will appear if you drill down again from level 1. If you are ever unsure which level of drill-down you are on you can write a formula that says simply:
DrillDownGroupLevel
Place this formula in the section in question. When you see that section the number shown will tell you the current drill-down level.
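DrillDownGroupLevel also works inside conditional formatting formulas. As a sketch, a font-color condition like the following (using Crystal’s built-in color constants) would turn a field red at any drill-down level below the main preview:

if DrillDownGroupLevel > 0 then crRed else crBlack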
DrillDownGroupLevel also helps you solve one of the dilemmas of report design. One of the lessons in my advanced class is how to have a report be either a detailed report or a summary report, based on a parameter prompt. You accomplish this with a parameter that has two choices (Summary/Detail) and you use that parameter in the detail section’s suppress formula. It has to be the suppress property because the “hide” property doesn’t have a condition formula button. And, because you are using the suppress property you can no longer drill-down from the summary version of the report to see the details for a group.
But by using the following as your suppress formula you can have the parameter to choose summary or detail, and still have drill-down available on the summary version:
{?Parameter} = "Summary" and
DrillDownGroupLevel <> 1
This will suppress the section when you choose “Summary”, but only as long as you are in the main preview tab. As soon as you drill down the suppress condition will no longer be met and the details will be visible.
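A slight variant, if you want the details to stay visible at every drill-down level rather than only the first, would be a suppress formula like this:

{?Parameter} = "Summary" and
DrillDownGroupLevel = 0

This suppresses the details only in the main preview tab when “Summary” is chosen, so they appear at any drill-down depth.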
There are two ways to use today’s date in a Crystal formula. These are useful when you want the report to automatically determine a date range, like the last three days. You can calculate the date for three days ago by using one of the following calculations:
CurrentDate - 3
or
DataDate - 3
These expressions will generate the same value at the time the report is refreshed. But if you interact with reports after refreshing them, or if you open reports with saved data, it is important to know how these functions differ.
CurrentDate is identical to the functions PrintDate and Today, and PrintDate is also available as a special field. These date functions will all update whenever one of three things happens. When the report is:
1) opened
2) previewed after a modification
3) printed
Interestingly, exporting a report to a PDF does NOT update these dates.
DataDate is a function and is also a special field. These are updated whenever the report is refreshed.
So let’s say I ran a report yesterday, saved it with data, and then reopened it today without refreshing it. The CurrentDate function will show today’s date while the DataDate function will show yesterday’s date.
You can see how these functions could affect record selection. If I calculate criteria to include the last 3 days using the CurrentDate function, and then reopen the report later with saved data, the CurrentDate will change and so will the criteria. The saved data will be reduced or eliminated. However, if I wrote the same criteria using DataDate there is no change, because I have not refreshed the report. So when deciding between these functions you should think about how the formula should respond when you reopen the report with saved data. If it should use the date when the report is opened, use CurrentDate. If it should use the date when the report was refreshed, use DataDate.
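For example, a record selection formula for “the last three days of data” that should stay stable when the report is reopened with saved data might look like this (the field name {Orders.OrderDate} is just a placeholder):

{Orders.OrderDate} >= DataDate - 3

Written with CurrentDate instead, the same rule would shift forward every time the report is reopened.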
Also note that there are corresponding time functions (CurrentTime, PrintTime) and special fields that have the same names. The time functions follow the same pattern as the date functions.
I first wrote about this issue a decade ago when it showed up in databases like Paradox, Btrieve and Visual dBASE. But I recently saw it twice in PostgreSQL environments so I am going to write about it again.
The problem occurs when you add a new rule to the selection formula. The main symptom is that the criteria works fine when you click “use saved data” but then when you refresh you lose some valid records. Some people think they have a bad join which can also cause you to lose records, but a bad join will lose records even if you click “use saved data”.
In my experience, the issue is usually connected with an index in the database. When Crystal Reports sends your new rule to the database, the database tries to use an index to speed things up. But some indexes don’t work correctly for filtering and will cause the report to miss some or all valid records. The solution is to prevent that rule from being sent to the database by forcing the rule to be applied locally. There are three approaches I have used:
1) Starting with CR 2008 (v12) you can put this rule into the “Saved Data” selection formula. This selection formula is always applied locally so the database usually doesn’t see it. Crystal will apply this selection formula to the records that are sent back by the database.
2) The first option may not work in all cases, and of course it won’t work if you use an older version of Crystal that doesn’t have that feature. In that case you have to use the method I wrote about in 2007. You write the selection formula in a way that forces Crystal to apply that rule locally. If the report is SQL based you want to prevent Crystal’s SQL generator from converting that rule into the SQL WHERE clause. The most reliable way I have found is by putting the database field inside a Crystal function. For instance if your problem rule uses a numeric field in the selection formula like this:
{table.Amount} > 50
I would write it as
Round({table.Amount},2) > 50
Or if your selection formula is:
{Transaction.Code} = "ABC"
I would write it as
Left({Transaction.Code},3) = "ABC"
Applying a function to a database field in the selection formula will usually prevent that rule from being converted into the SQL. In some cases I write the rule as a separate formula field called {@Criteria} and then reference {@Criteria} in the selection formula by adding a line to the selection formula like:
…. and {@Criteria}
Note that this is the exact opposite of what you normally want to do. Normally we want ALL of the record selection formula to be sent to the database or converted into the SQL WHERE clause, because that makes your query more efficient. When I have a slow report, one of the first things I check for is functions in the selection formula. But it doesn’t help to be efficient if the database can’t handle the rule correctly.
3) I recently had one case where neither of the above options solved the problem. No matter how I wrote the criteria formula, as soon as it was part of the selection formula I would lose records. So we gave up on selection and used suppression instead. This is a last resort. You don’t use the {@Criteria} formula in the selection formula at all. Without the rule the report will include some records that you don’t want. You then suppress the unwanted records with a suppression formula like this:
not {@Criteria}
When you use this method you also have to make sure these records aren’t included in any totals. This means writing a formula that only includes the values from the records that you want, and then basing all totals on that formula. For instance I could write a formula like this:
if {@Criteria}
then {table.Amount}
else 0
If I total this formula it won’t matter that I have suppressed records in the report. I don’t see them and they aren’t in the totals.
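The same approach works for counts. A sketch of a formula that contributes 1 for wanted records and 0 for the suppressed ones, so that a Sum of this formula acts as a count of only the visible records:

if {@Criteria} then 1 else 0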
You can create Crystal Reports that directly read CSV files using the Access/Excel(DAO) connection. Just keep in mind that CSV files don’t always make an ideal data source. Like spreadsheets, CSV files don’t have set data types for each column. This ambiguity might cause you to lose some data. And in some cases the report will read the first record of the CSV file as the column headings, removing the first record of data from the dataset.
But here CSV files have one advantage over XLS files. CSV files allow you to introduce a Schema.ini file to define the data type for each column in the CSV. This is something you can’t do with spreadsheets. The schema.ini file is a simple text file that sits in the same folder as your CSV. There are many attributes available in schema.ini, but you only need to include the ones you want to set. The rest will fall back to defaults stored in the registry. Here are the most common problems I find that can be solved with a schema.ini file:
- The CSV is reading the first row of data as column headings
- Columns read as the wrong data type
- Character column is read as numeric and shows only the numeric values
- The columns are parsed using the wrong character
Here is an example of a schema.ini that defines two different CSV files in the same folder:
[sample1.csv]
Format=CSVDelimited
ColNameHeader=False
[sample2.csv]
Format=CSVDelimited
ColNameHeader=False
Col1=OrderDate date
Col2=Amount long
Col3=CustID text
Col4=CustName text
Col5=CustCategory text
As you can see, a single INI file can define multiple CSV files when they are in the same folder. Each file gets its own section of the INI file. Both files are set to be read as comma delimited. Both files are set to NOT treat the first row as column headings. In the first file we allow the driver to name the fields (usually F1, F2, F3, etc.) and determine the data type automatically. In the second file we name each column and assign each a data type.
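Schema.ini can also fix the parsing character mentioned above. As one more sketch, a hypothetical third file in the same folder that uses semicolons as separators and does treat its first row as column headings could be described like this:

[sample3.csv]
Format=Delimited(;)
ColNameHeader=True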
So if you are reporting on a CSV file and running into issues, using a Schema.ini file may help solve the problem.