


Which Two Statements Regarding Caching Of Sql Server Reporting Services Reports Are True?

Introduction

In past chats, we have had a look at a myriad of different business intelligence techniques that one can utilize to turn data into information. In today's get-together we are going to try to pull all these techniques together, rationalize our development plans, and moreover, look at some good habits to adopt or, for want of better words, SQL Server Reporting Services Best Practices.

During today's discussion, we shall be touching upon the following issues.

  • Discussing the pros and cons of utilizing shared datasets.
  • The 'pluses' and pitfalls of utilizing embedded reports.
  • Report utilization logging.
  • Utilization of graphics.
  • Caching of information.

So let us get to it!

Utilization of datasets (shared vs. embedded)

To refresh our minds as to what a dataset is, I prefer to use a metaphor.

In 'attaching' any report to the data from the database, one requires two critical components.

  • A data source
  • A data set

Imagine that we have a house (our database) that has an outside water tap. This water tap will be used to water five house plants 20 m from the house. What we really need is a water hose (data source) to get the water to the plants. The five pot plants (datasets) are watered with the water from the hose. The datasets provide data for our reports, charts, and matrices, and these datasets are populated at runtime.

There are two ways that our plants may be watered. I fill each pot separately (create five embedded or local datasets) or utilize a large bucket and fill each pot from the big bucket (shared datasets). Shared datasets are global datasets and may be utilized by all reports within the project. Embedded or local datasets are available exclusively to the current report.

In order to decide which method of report data storage is most optimal (for our current needs), we must look at a scenario where both are utilized and then ask ourselves some very important questions.

In the screen dump below, we see a typical report (created for a user) that shows the varied goals and performance results for a period of time.

A WIG (see the term in the screen dump above) is a "wildly important goal".

Note that we have five shared datasets in the upper right-hand side of the screen dump. Note also the datasets in the "Report Data" window on the left-hand side of the screen dump. The trick is to understand how the pieces fit together. Armed with this knowledge we can make some intelligent decisions as to what type of dataset should be utilized for each of our charts shown in the screen dump above.

Below, we see a data source created to tap data from the WIG database.

The connection data is stored within the data source (see below).

Our shared dataset "WigLeadLagDescriptions" (see above and to the top right under the "Shared Datasets" folder) is connected ("filled") by our data source "ServiceGrowth2" (see below).

Finally, our "DescriptionStrategy" local / embedded dataset (see below and left) derives its data from the shared "WigLeadLagDescriptions" dataset (see below to the top right).

Now as this local dataset is exclusively utilized by the Strategy team charts, and as the data in the shared dataset contains the descriptions for all the departments, we needed to apply a filter to the local dataset to ensure that the only data within that dataset is related solely to strategy. This was achieved as follows:

Stepping back from all this detail, this is how the data distribution was organized, looking at it from a "bird's eye view".

In short, each local or "embedded" dataset will contain a subset of the shared dataset. This subset is obtained by FILTERING the data pulled from the shared dataset. This is achieved by placing a filter constraint in the local dataset definition (see below).
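As a rough sketch of what such a filter constraint amounts to (the actual field name is not shown in the text, so "SolutionsArea" here is an assumption), the Filters tab of the local dataset would carry an entry along these lines:

```
Expression: =Fields!SolutionsArea.Value
Operator:   =
Value:      ="Strategy"
```

The filter is applied by Reporting Services after the shared dataset's data has been retrieved, which is exactly why the database itself is only hit once.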

So how do we decide which type of dataset is appropriate? Much depends upon how much data will be pulled and DO YOU REQUIRE it all for each chart and/or matrix within the report.

What I have not disclosed until now is that the WIG team horizontal gauges (see below) have an 'action' attached to them.

Clicking on one of these horizontal gauges results in a recursive call to the same report and passes an integer value to the report. Had we clicked upon the Strategy gauge (see above), the same report is called, a value of 1 is given to a variable called "Toggler", and the name of the solutions area is also passed via the variable "SolutionsArea" (see below).

Now had we clicked upon the "Sales" horizontal gauge, once again a recursive call is executed, however this time "Toggler" is set to "2" and "SolutionsArea" is set to "Sales" (see below).

There are four more "WIG Teams" in addition to the two that we just discussed, making six in total.

The astute reader will note in the screenshot above that the bottom bar chart has a title "<Expr>" (see the circle in the screenshot above). What is not immediately apparent is that there are in fact six bar charts superimposed upon each other (as shown diagrammatically below):

This said, when the user clicks upon the Strategy gauge, the "SolutionsArea" variable is set to 'Strategy' and the toggle variable is set to 1. The trick is that at any one time only one chart may be visible, and utilizing the visibility property of the charts we are able to show or hide each chart depending upon which "Toggler" value is passed by the user's solutions area selection.
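As a minimal sketch of this show/hide mechanism (assuming the report parameter is named "Toggler" and that the Strategy chart answers to the value 1), the Hidden property of the Strategy chart would carry an expression along these lines:

```
=IIF(Parameters!Toggler.Value = 1, False, True)
```

Each of the six superimposed charts tests for its own "Toggler" value, so that exactly one chart is rendered for any given selection.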

To learn more about how this technique is handled, please do have a look at an article that I recently published on SQLShack entitled "Now you see it, now you don't".

For this report, we chose to utilize embedded or local datasets (local to this report), as each dataset is solely utilized by and attached to one and only one vertical bar chart. The important point is that we take a runtime hit ONCE by having the individual datasets populated from the shared dataset, and these local or embedded datasets are then persisted within the cache. As we saw above, each is filtered for a particular solutions area. Once again, the astute reader will note that it would be most difficult to implement an equivalent and efficient filtering mechanism on the shared dataset (particularly should this shared dataset contain hundreds of thousands of records).

Many people say that a picture is worth a thousand words and, with regard to the lengthy discussion that we have just had, we can summarize it all in the slide shown below:

Instead of executing a major table-based query or stored procedure six times to populate each of the boxes (local datasets) on the left, and potentially having to do a table scan, or at best an index scan, on ALL the table records (looking for these six solutions areas from the myriad of subject areas present in the table), we pull once into the "WIG Descriptions" box/dataset on the right and then populate the left-hand boxes utilizing six queries drawn from the subset dataset "WIG Descriptions".

Cases in which shared datasets are the answer

In our next example, we shall be looking at a dashboard that I created for a major financial institution (as a prototype) a few years back. The dashboard may be seen below:

The dashboard functions with numerous subreports, as may be seen above. Each sub-report has its own shared datasets and NO local or embedded datasets, as may be seen in the screen dump below.

Opening the dataset tab on the left, the reader may see that the dataset providing the data originates from the shared dataset "SQLShackYeaarsInBlackIsTwo" (see below).

The important point to understand is that while a local dataset "SQLShackYearsInBlack" is stubbed off, no further query is defined to pull the data from the database tables; there is merely a connection made between the "local dataset" (which will be utilized by the gauge seen above) and the shared dataset.

The differences are fine and subtle. Moreover, the correct decision is vital to ensure maximum efficiency in rendering the report results.

Clearing of the report cache

One of the pitfalls or gotchas (for want of better words) when it comes to developing our reports within the Visual Studio environment is the caching of data within the system-generated ".rdl.data" file. The nuances of this are fine in that we may make cosmetic changes to the report with regard to the filtering etc., and when we preview the report after having made these changes, what is rendered is definitely NOT what we expected to see. This may include changes that we may have made to the stored procedures that feed the datasets. Whilst running them within the SQL Server Management Studio environment the correct results are obtained, yet the instant that your report is run and rendered, one notes that the results are the "same old, same old". This can be most disconcerting at times.

The reason for this is that our report is sourcing its data from a cached disc file (see below).

This Reporting Services generated file must be removed and regenerated by Reporting Services in order for the changes to be rendered. A build or rebuild of the project does NOT accomplish this!

Jason Faulkner has a super little routine that he wrote that gives us a quick and dirty way of purging this data file.

  • Go to Tools > External Tools
  • Add a new tool and set the parameters as follows:
  • Title: Clear Report Data Cache
  • Command: "%WinDir%\System32\cmd.exe"
  • Arguments: /C DEL /S /Q "$(SolutionDir)\*.rdl.data"
  • Check options: Use Output window & Close on exit
http://jasonfaulkner.com/ClearDataCacheBIS.aspx

The "Articulate Written report Data Cache" feature (once constructed) may exist seen in the screen dump beneath:

Report Utilization Logging

How often have you been asked to create a report which is so important that it is required "yesterday or sooner", only to find out that the report was used once and never again? In fact, your Visual Studio Reporting Services project may look similar to the one shown below:

This project has a plethora of unused/redundant/obsolete reports. In short, the project should be cleaned up and the report server refreshed. The tough question is just how do we ascertain which reports are being utilized and which are not.

One way to obtain these statistics is to construct a quick and dirty SQL Server Reporting Services monitoring application (see below).

In fact, this is the topic for my next article, in which we shall be going through the steps to create this application / report.

In the interim though, please note that:

  1. The vertical bar chart (above) shows the number of reports executed for the current month.
  2. The matrix (immediately below the vertical bar chart) shows the execution times etc. for all reports that were run this month.
  3. The "Top 5 Reports" matrix shows the 5 most executed reports during the current month.
  4. The "Unused Reports" feature is what is of interest to us in our present exercise, as this will tell us which reports have not been used for some time and are therefore candidates for removal and clean up.
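The raw material for such a monitoring report lives in the report server's execution log. As a rough sketch only (assuming a default-named ReportServer catalog database and its built-in ExecutionLog3 view; names may differ per installation, and the 90-day cut-off is an arbitrary choice), a query listing reports with no recent executions might look like this:

```sql
-- Candidate "unused" reports: no recorded execution in the last 90 days.
-- Assumes the default ReportServer database; adjust names to your installation.
SELECT c.Path, c.Name
FROM   ReportServer.dbo.Catalog AS c
LEFT JOIN ReportServer.dbo.ExecutionLog3 AS e
       ON  e.ItemPath  = c.Path
       AND e.TimeStart >= DATEADD(DAY, -90, GETDATE())
WHERE  c.Type = 2                        -- 2 = report
GROUP BY c.Path, c.Name
HAVING COUNT(e.ItemPath) = 0
ORDER BY c.Path;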

The use of graphics within reports

Let us ask ourselves the question "Why should we consider the use of graphics as a best practice?" If we are honest with ourselves, the following points should come to mind.

  • Graphics allow the decision maker to glean information in a minimal amount of time.
  • Anomalies become immediately apparent.
  • No decision maker has the time to sift through reams of data.
  • A picture is worth 1000 words. In short: Information, NOT data, is required.

In the screen dump shown below, the financial results for SQLShackFinancial's past few years are shown. In the first screen dump, the vertical bar chart's fill color is generated by the system; in our case, the colour blue. While the vertical bar chart does show income increasing with time, it does not tell us anything about how the "actuals" compare with the "planned".

In our next screen dump, which is generated from the same query and from the same SQL Server Reporting Services project, the data becomes more informative. Note the usage of "our selected colours" to tell the whole story (see below).

Up until July 2007, we were doing extremely poorly; however, business started to 'turn around' going forward.

The business rules for the colour fill of the vertical bars were defined as follows.

For each month:

  • Amount < $500,000,000: fill color Red
  • Amount >= $500,000,000 and < $1,000,000,000: fill color Yellow
  • Amount >= $1,000,000,000: fill color Green

In terms of coding:
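These rules can be sketched as a Switch expression on the bar's Fill colour property (assuming the field holding the monthly amount is named "Amount", which is not confirmed in the text). Switch evaluates its condition/value pairs in order and returns the value of the first condition that is true:

```
=Switch(
    Fields!Amount.Value < 500000000,  "Red",
    Fields!Amount.Value < 1000000000, "Yellow",
    True,                             "Green"
)
```

Because the conditions are tested top to bottom, the second test only fires for amounts between $500,000,000 and $1,000,000,000, and the final `True` acts as the catch-all for everything at or above $1,000,000,000.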

Conclusions

SQL Server Reporting Services has its quirks and may often seem a difficult tool to utilize to obtain efficient and effective information from our data. Knowing how best to work with the product is important, as are the means to work around the varied "gotchas".

Datasets are the key to efficient rendering of reports. Local datasets help us when we are taking a "narrow view" of the data: "Show me the one record out of 100,000,000 that must be shown for the title of my vertical bar chart".

The shared dataset is more conducive to environments where all the extracted data will be utilized by all the matrices and charts, where filtering is done on initial data extraction from the table and the resulting data set contains all that is needed.

As with everything else in life, generalizations are never 100% true.

Happy programming!



Source: https://www.sqlshack.com/sql-server-reporting-services-best-practices/

Posted by: yonyoublicut.blogspot.com
