Which of the following represent the information Granularities in an organization?

Time Granularity

Jérôme Euzenat, Angelo Montanari, in Foundations of Artificial Intelligence, 2005

3.6.3 Temporal databases

Time granularity is a long-standing issue in the area of temporal databases (see Chapter 14). As evidence of the relevance of the notion of time granularity, the database community has released a “glossary of time granularity concepts” [Bettini et al., 1998a]. As we already pointed out, the set-theoretic formalization of granularity (see Section 3.3) was settled in the database context. Moreover, besides theoretical advances, the database community contributed some meaningful applications of time granularity. As an example, in [Bettini et al., 1998b] Bettini et al. design an architecture for dealing with federated databases involving various granularities. This work takes advantage of extra information about the database design assumptions in order to characterize the required transformations. The resulting framework is certainly less general than the set-theoretic formalization of time granularity reported in Section 3.3, but it brings granularity to concrete database applications. Time granularity has also been applied to data mining procedures, namely, procedures that look for repeating collections of events in federated databases [Bettini et al., 1998d] by solving simple temporal reasoning problems involving time granularities (see Section 3.3). An up-to-date account of the system is given in [Bettini et al., 2003].
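As a toy illustration of the kind of conversion between granularities that such systems perform (mapping a fine granule, a day, to the coarser week and month granules that contain it), here is a minimal sketch using the standard Java time API. The ISO week convention is an assumption made for illustration; this is not the formalization of Section 3.3.

import java.time.LocalDate;
import java.time.temporal.IsoFields;

// Toy granularity conversion: map a day (fine granule) to the ISO week
// and calendar month (coarser granules) that contain it.
public class TimeGranularityDemo {
    public static void main(String[] args) {
        LocalDate day = LocalDate.of(2024, 3, 15);
        int week = day.get(IsoFields.WEEK_OF_WEEK_BASED_YEAR); // containing week granule
        int month = day.getMonthValue();                       // containing month granule
        System.out.printf("day %s -> ISO week %d, month %d%n", day, week, month);
    }
}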


URL: https://www.sciencedirect.com/science/article/pii/S1574652605800057

Data Quality

Jan L. Harrington, in Relational Database Design and Implementation (Fourth Edition), 2016

Inconsistent Granularity

Granularity is the level of detail at which data are stored in a database. When the same data are represented in multiple databases, the granularity may differ. As an example, consider the following tables:

[Two tables are compared at this point in the original text: an order-detail table from the sales database, whose cost column holds the per-unit amount charged, and a marketing table whose cost column holds the line cost (quantity * cost). The tables themselves are not reproduced here.]

Both tables contain a cost attribute, but the meaning and use of the columns are different. The first relation is used in the sales database and includes details about how many of each item were ordered and the amount actually charged for each item (which may vary from what is in the items table). The second is used by marketing. Its cost attribute is actually the line cost (quantity * cost from the sales database). Two attributes with the same name therefore have different granularities. Any attempt to combine or compare values in the two attributes will be meaningless.
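To make the mismatch concrete, here is a minimal sketch (hypothetical record and field names, not the book's tables) of why comparing the two cost attributes directly is meaningless:

// Hypothetical illustration of inconsistent granularity: two "cost" values
// with the same name but different meanings.
public class GranularityMismatch {
    // Sales database: cost is the per-unit price actually charged.
    record SalesLine(String item, int quantity, double cost) {}

    // Marketing database: cost is the extended line cost (quantity * unit cost).
    record MarketingLine(String item, double cost) {}

    public static void main(String[] args) {
        SalesLine sales = new SalesLine("widget", 10, 2.50);
        MarketingLine marketing = new MarketingLine("widget", 25.00);

        // Comparing the two "cost" attributes directly is meaningless:
        System.out.println(sales.cost() == marketing.cost());                  // false
        // They only agree once the granularity is reconciled:
        System.out.println(sales.cost() * sales.quantity() == marketing.cost()); // true
    }
}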

This type of problem needs to be handled at an organizational level rather than at the single database level. See the last part of this section for details.


URL: https://www.sciencedirect.com/science/article/pii/B9780128043998000259

Photographic Processes and Materials

P.S. Vincett, M.R.V. Sahyun, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

VI.A.3 Granularity

Granularity is a measure of the noise content of an image. The term comes from the fact that in conventional photography a high noise content image appears grainy to the viewer. Zero granularity is, of course, impossible. Consider a finite number of photons falling on an array of detectors. They will be randomly distributed among the detectors, even if the exposure is uniform. The distribution can be described by Poisson statistics. Thus if N photons on average impact each detector, the standard deviation in the exposure experienced by the detectors in the array will be N^0.5 (that is, √N). This variability gives rise to what is known as “shot noise” or “quantum mottle.”
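A quick numerical check of this √N relationship, as a sketch (the Poisson sampler uses Knuth's multiplication method, which is adequate for moderate N):

import java.util.Random;

// Simulate uniform exposure of a detector array and verify that the
// standard deviation of photon counts approaches sqrt(N).
public class ShotNoise {
    // Knuth's algorithm for sampling a Poisson-distributed count.
    static int poisson(double mean, Random rng) {
        double l = Math.exp(-mean), p = 1.0;
        int k = 0;
        do { k++; p *= rng.nextDouble(); } while (p > l);
        return k - 1;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double n = 100.0;           // mean photons per detector
        int detectors = 100_000;
        double sum = 0, sumSq = 0;
        for (int i = 0; i < detectors; i++) {
            int count = poisson(n, rng);
            sum += count;
            sumSq += (double) count * count;
        }
        double mean = sum / detectors;
        double stdDev = Math.sqrt(sumSq / detectors - mean * mean);
        // Expect stdDev close to sqrt(100) = 10.
        System.out.printf("mean=%.2f stdDev=%.2f sqrt(N)=%.2f%n", mean, stdDev, Math.sqrt(n));
    }
}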


URL: https://www.sciencedirect.com/science/article/pii/B012227410500569X

Automatic Temporal Layout Mechanisms

M. Cecelia Buchanan, Polle T. Zellweger, in Readings in Multimedia Computing and Networking, 2002

Granularity

Granularity specifies where temporal relationships can be placed: between points in time, between temporal intervals, or both. A point can be any of several things: an absolute time; a relative time during the document's presentation (e.g., 10 seconds after the start of the document, or halfway through a media segment); a predictable or unpredictable event in a media segment; an external event that can be passed through to the system (e.g., user interaction); or a composite point, which is any temporal relationship that produces a specific point in time (see below). An interval can be an entire media segment; a portion of a media segment (e.g., the first scene in a video); or a composite interval, which is any temporal relationship that produces an interval of time.
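One way to picture this taxonomy is as a small type hierarchy. The following Java sketch is hypothetical (the type and field names are illustrative, not the authors' system):

// Hypothetical model of the placement granularities described above:
// temporal relationships may attach to points, to intervals, or to both.
public class TemporalLayout {
    sealed interface Point permits AbsoluteTime, RelativeTime, MediaEvent {}
    record AbsoluteTime(long epochMillis) implements Point {}
    record RelativeTime(double secondsAfterStart) implements Point {}
    record MediaEvent(String segment, String eventName) implements Point {}

    // An interval can span a whole media segment or a portion of one.
    record Interval(Point start, Point end) {}

    public static void main(String[] args) {
        Point tenSecondsIn = new RelativeTime(10.0);
        Interval firstScene = new Interval(new RelativeTime(0.0), new RelativeTime(42.0));
        System.out.println(tenSecondsIn + " / " + firstScene);
    }
}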


URL: https://www.sciencedirect.com/science/article/pii/B9781558606517501595

Service-Oriented Architecture

James McGovern, ... Sunil Mathew, in Java Web Services Architecture, 2003

Multi-Grained Methods

The granularity of the methods within a service is of equal or greater importance than the granularity of the service itself. Using the previous bank account example, consider the retrieval of account holder information from the bank account service. There are several ways to implement this interface (illustrative signatures for all five options are sketched in code after the list):

A method in BankAccountService called GetAccountHolder that returns only account-holder information and not the address

Two methods in BankAccountService, called GetAccountHolder and GetAccountHolderAddress; GetAccountHolder would not return address information

A method in BankAccountService called GetAccountHolder that could return both the name and address of the account holder

A method in BankAccountService called GetAccountHolder that could have a switch that tells the service whether to return address information as well as account-holder information

A method in BankAccountService called GetAccountHolder that could accept a list of attributes it wants the service to return; the consumer can choose to get the address by adding the address attributes to the attribute list it passes in to the service
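Before weighing the options, here is a hedged Java sketch of what the five signatures might look like. Everything beyond the BankAccountService and GetAccountHolder names from the text is illustrative, and Java naming conventions are applied:

import java.util.List;
import java.util.Map;

// Illustrative signatures for the five design options above.
public interface BankAccountService {
    // Option 1: account-holder information only, no address.
    AccountHolder getAccountHolder(String accountId);

    // Option 2: a second, separate method just for the address.
    Address getAccountHolderAddress(String accountId);

    // Option 3: one call that always returns both.
    AccountHolderWithAddress getAccountHolderFull(String accountId);

    // Option 4: a switch tells the service whether to include the address
    // (the address field is null when not requested).
    AccountHolderWithAddress getAccountHolder(String accountId, boolean includeAddress);

    // Option 5: the consumer lists exactly the attributes it wants back.
    Map<String, Object> getAccountHolder(String accountId, List<String> attributes);
}

record AccountHolder(String firstName, String lastName) {}
record Address(String street, String city, String zipCode) {}
record AccountHolderWithAddress(AccountHolder holder, Address address) {}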

Let's examine these options and their consequences. As Figure 2.10 shows, BankAccountService returns just account-holder information.


Figure 2.10. A method that returns only account-holder information.

This scenario works well if the consumer needs only account-holder information. But a consumer who needs address information as well is out of luck. The address information could be retrieved from the service by adding a GetAccountHolderAddress method, as illustrated in Figure 2.11.


Figure 2.11. A method that returns either the account-holder's information or address.

This solves the problem of retrieving address information, but if most of the consumers need address information, more trips are necessary. Having the GetAccountHolder method return both account-holder information and address information in one call would improve performance and reduce the work necessary for the consumer to assemble the two results.

Figure 2.12 illustrates this scenario.


Figure 2.12. A method that returns both the account-holder's information and address.

This solution works well for consumers who always retrieve address information, but if they almost never need this information, more data than necessary will travel across the network. It will also take longer for service consumers to extract the account-holder data they need from the larger message.

Another solution is to pass in an argument that directs the service whether to return address information. A BankAccountService would have only one GetAccountHolder method. The developer would add an additional argument to the method, to instruct the service whether to return address information as well. Consumers who need only account-holder information could pass in the proper switch to retrieve it. Users who need address information as well could pass in the proper switch to retrieve both.

But what if consumers need only zip codes for all account holders? They would have to retrieve both account-holder information and address information and extract zip codes from a very large message. What if consumers pass in the list of attributes in which they're interested?

This sophisticated alternative implements an interface that accepts a list of attributes to return to the consumer. Instead of sending the account number and an address indicator, consumers submit a list of all of the attributes to return. The list may contain just first and last names or may include all or portions of the address data, such as city and street address. The service would interpret this list and construct the response to consumers to include only the data requested. This solution minimizes both the number of trips consumers make to the service and the amount of data that must travel the network for each request. Figure 2.13 illustrates this option.


Figure 2.13. A method that returns just the attributes requested.

This approach has two downsides. The first is that the request message will be larger than in any of the previous solutions, because the consumer must send the data map as well as the request data on each request. If all service consumers need exactly the same data from the service, this solution will perform worse than the previously discussed alternatives.

Second, this solution is also more complex to implement for service developers, and service consumers might find the interface more difficult to understand and use. To alleviate this problem, a service proxy could wrap the complexities of the service interface and provide a simple interface for consumers. A consumer would use multiple distinct and simple service methods on the proxy. The methods map to the way the consumer wants to use the service. The proxy would internally map these multiple methods into a single service-request interface format that accepts a map of data to return. The advantage of this technique is that it allows the service to support any granularity, while providing specific granularities to consumers based on their domain understanding.
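A hedged sketch of that proxy idea, reusing the Option-5 signature from the earlier sketch (the method and attribute names are illustrative):

import java.util.List;
import java.util.Map;

// A consumer-facing proxy that exposes simple, purpose-specific methods and
// maps each onto the single attribute-list method of the service.
class AccountHolderProxy {
    private final BankAccountService service; // the Option-5 interface sketched above

    AccountHolderProxy(BankAccountService service) {
        this.service = service;
    }

    // For consumers who only need the account holder's name.
    Map<String, Object> getName(String accountId) {
        return service.getAccountHolder(accountId, List.of("firstName", "lastName"));
    }

    // For consumers who only need the zip code of the account holder.
    Map<String, Object> getZipCode(String accountId) {
        return service.getAccountHolder(accountId, List.of("zipCode"));
    }
}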

If these implementations are not possible, it is always better to return more data, to minimize network round trips, because future clients are likely to need the data. It is also possible to implement several of these options, to solve the needs of multiple consumers. However, this increases the effort to maintain the service and also detracts somewhat from the service's modular understandability.

A service's ability to have multi-grained methods that return the appropriate amount of data is important to reduce network traffic. Extra network traffic is due either to excessive unnecessary data or to a large number of requests to get data.

Granularity is a difficult problem to reconcile when designing service interfaces. It is important to understand the options and implement the most appropriate interface. In the past, arguments surrounding service interfaces have focused mainly on determining the single right granularity. In practice, services require the designer to find the right granularities for their various consumers.


URL: https://www.sciencedirect.com/science/article/pii/B9781558609006500051

How SDN Works

Paul Göransson, ... Timothy Culver, in Software Defined Networks (Second Edition), 2017

4.3.5 Scaling the Number of Flows

The granularity of flow definitions will generally be finer as the device holding them approaches the edge of the network, and coarser as the device approaches the core. At the edge, flows will permit different policies to be applied to individual users, and even to different traffic types of the same user. In some cases this will imply multiple flow entries for a single user. This level of flow granularity simply would not scale if it were applied closer to the network core, where large switches deal with the traffic of tens of thousands of users simultaneously. In those core devices, the flow definitions will generally be coarser, with a single aggregated flow entry matching the traffic of a large number of users whose traffic is aggregated in some way, such as a tunnel, a VLAN, or an MPLS LSP. Policies applied deep in the network will likely not be user-centric policies but policies that apply to these aggregated flows. One positive result of this is that there will not be an explosion in the number of flow entries in the core switches due to handling the traffic emanating from thousands of flows in edge switches.
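As a rough sketch of this contrast (hypothetical match structures, not the OpenFlow wire format):

// Hypothetical flow-match records illustrating edge vs. core granularity.
public class FlowGranularity {
    // Edge: one entry per user and traffic type (fine-grained).
    record EdgeFlow(String userMac, int tcpPort, String policy) {}

    // Core: one entry per aggregate, e.g., everything in one MPLS LSP (coarse).
    record CoreFlow(int mplsLabel, String policy) {}

    public static void main(String[] args) {
        EdgeFlow voip = new EdgeFlow("aa:bb:cc:dd:ee:ff", 5060, "low-latency");
        CoreFlow tunnel = new CoreFlow(1001, "expedite-aggregate");
        System.out.println(voip + "\n" + tunnel);
    }
}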


URL: https://www.sciencedirect.com/science/article/pii/B9780128045558000041

Data Vault 2.0 Modeling

Daniel Linstedt, Michael Olschimke, in Building a Scalable Data Warehouse with Data Vault 2.0, 2016

The granularity of links is defined by the number of hubs that they connect. Every time a new hub is added to a link, a new level of grain is introduced: the more hubs a link connects, the finer the granularity becomes. Adding a new hub thus lowers the grain of the link. In this way, links behave like fact tables in a dimensional model, where adding a new dimension to a fact table lowers the grain as well.

If the Data Vault link is already in production and the business requires a change of grain, there are two options for refactoring the Data Vault model. First, we could modify the existing link and add a reference to another hub. However, this would require re-engineering existing ETL jobs, and it must also be determined how historic data is handled in the new link (which has a different grain). It also puts the auditability of the data at risk, because the source system never delivered a NULL value (Figure 4.19).


Figure 4.19. Manipulating historic data in a Data Vault link.

For this reason, the modification of existing link structures is no longer an acceptable practice in Data Vault 2.0 modeling.

Note that, in Figure 4.19, every reference to other business objects (hubs) is provided by a pseudo hash key and the business key in curly brackets (e.g., “8fe9… {UA}”).

The better option is to create a new link for new incoming data and to “close” the old link. Closing the link means that no new data is added to the link table; instead, new data is added to the new link. When the business builds the information mart, it must define how the links with different grains are to be merged. By doing so, we can ensure the auditability of the incoming data and meet the requirements of the business.

The SQL statement in the middle of Figure 4.20 represents a business rule that describes how to handle old data. For example, the rule specifies that the airport in the old data be set to unknown, as represented by the hash key 0 (simplified). The resulting table is used to create a Business Vault table (as discussed in Chapter 14, Loading the Dimensional Information Mart) or a dimension in the information mart.


Figure 4.20. Merging historic data (upper left) with new data (upper right) from individual Data Vault links.
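The SQL in the figure is not reproduced here; as a hedged sketch of such a merge rule (all table and column names below are hypothetical, and CREATE TABLE ... AS syntax varies by database), it might look like:

import java.sql.Connection;
import java.sql.Statement;

// Hypothetical merge of an old link (no Airport reference) with a new link
// (with Airport reference); the missing grain defaults to hash key 0 (unknown).
public class MergeLinks {
    static final String MERGE_SQL = """
        SELECT FlightHashKey, CarrierHashKey, '0' AS AirportHashKey
        FROM LinkFlightOld                 -- old grain: no airport reference
        UNION ALL
        SELECT FlightHashKey, CarrierHashKey, AirportHashKey
        FROM LinkFlightNew                 -- new grain: includes airport
        """;

    // Materialize the merged result, e.g., as a Business Vault table.
    static void buildMergedTable(Connection conn) throws Exception {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE LinkFlightMerged AS " + MERGE_SQL);
        }
    }
}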

The second option also works when a level of grain (a reference to a hub) needs to be removed (Figure 4.21).


Figure 4.21. Removing historic data in a Data Vault link.

If we simply removed the Airplane column from the old link, we would lose the data due to the removal of the reference from the link. This is the worst case for an auditable data warehouse. The data reduction works similarly to a GROUP BY statement without any measures.

Figure 4.22 shows the reduction of a level of grain. Note the SELECT DISTINCT in the SQL statement.


Figure 4.22. Merging historic data (upper left) with new data (upper right) from individual Data Vault links.
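By the same token, reducing a level of grain amounts to projecting the hub reference away and de-duplicating, much like the GROUP BY analogy above. A hedged sketch, reusing the hypothetical names from the previous listing (here the old link carries an AirplaneHashKey column that is dropped):

// Hypothetical grain reduction: the Airplane reference is projected away
// and duplicates collapse, like a GROUP BY without any measures.
public class ReduceGrain {
    static final String REDUCE_SQL = """
        SELECT DISTINCT FlightHashKey, CarrierHashKey
        FROM LinkFlightOld                 -- AirplaneHashKey column dropped
        """;
}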


URL: https://www.sciencedirect.com/science/article/pii/B9780128025109000040

Operating Systems Overview

Peter Barry, Patrick Crowley, in Modern Embedded Computing, 2012

Time of Day

Most operating systems provide time of day services, giving the real-world date and time of the platform. The operating system can maintain this in one of two ways. The first is to read a Real-Time Clock (RTC). An RTC is a hardware timer that is typically backed up by a battery, so it keeps running even when the platform power is removed. The RTC provides date, time, and alarm services. The second option is for the time to be maintained by the CPU while the CPU is running. In this case the operating system uses a timer interrupt (perhaps the operating system tick timer) to count interrupts and increment the time. The time and date value must be seeded when the platform first starts up, as it is not retained when no power is applied. In embedded systems, the initial time may be seeded using the RTC mentioned above, or it may be acquired from an external source. A platform can acquire time from many external sources; the most frequent method is to obtain the time from a network source using the Network Time Protocol (NTP). This protocol allows a network time server to provide time updates to clients with very high precision. NTP is used not only to seed the initial time on a platform but also to ensure that an accurate time is maintained over time (bearing in mind that the local oscillators on an embedded platform may not be very accurate). In mobile systems, a very accurate time may be obtained from the cellular radio network.

The concept of time on Unix/Linux systems is represented by a counter that counts seconds since the epoch. The epoch is defined (by the POSIX 2008 standard) as midnight Coordinated Universal Time (UTC) of January 1, 1970. The POSIX function time_t time(time_t *tloc) returns a type time_t, which represents the number of seconds since the epoch. Most operating systems provide functions to convert this value into other forms, such as the function asctime(), which converts the time in seconds to a string.
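The same second-granularity epoch counter is visible from Java; a sketch (Instant here plays the role of time_t plus asctime(), it is not the POSIX call itself):

import java.time.Instant;

// Seconds since the POSIX epoch (midnight UTC, January 1, 1970),
// and a human-readable rendering analogous to asctime().
public class EpochTime {
    public static void main(String[] args) {
        long seconds = Instant.now().getEpochSecond(); // counterpart of time()
        System.out.println("seconds since epoch: " + seconds);
        System.out.println("as a timestamp:      " + Instant.ofEpochSecond(seconds));
    }
}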

The granularity of time offered by the time() API is seconds. In many cases you will require a more precise measure of time, such as microseconds. The POSIX standard defines the following APIs to address this requirement:

clock_getres() returns the resolution for a particular timer source, for example, CLOCK_REALTIME (system time).

clock_gettime()/clock_settime() get and set the time for the clock source specified in the call (for example, the real-time clock).

These calls both indicate the resolution and return the time to the appropriate resolution. At the time of writing, Linux 3.x reported a resolution of 1 ns, as shown by the code in Figure 7.11.


FIGURE 7.11. Get Resolution of Time on Platform.
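The C listing of Figure 7.11 (a call to clock_getres() on CLOCK_REALTIME) is not reproduced here. As a rough Java analogue, one can probe the timer granularity empirically; a sketch (this measures the smallest observable tick, not the advertised resolution):

// Empirically probe the granularity of the high-resolution timer by
// looking for the smallest observable increment of System.nanoTime().
public class TimerResolution {
    public static void main(String[] args) {
        long minDelta = Long.MAX_VALUE;
        for (int i = 0; i < 1_000_000; i++) {
            long t0 = System.nanoTime();
            long t1 = System.nanoTime();
            if (t1 > t0) minDelta = Math.min(minDelta, t1 - t0);
        }
        System.out.println("smallest observed tick: " + minDelta + " ns");
    }
}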


URL: https://www.sciencedirect.com/science/article/pii/B9780123914903000072

5G flexible scheduling

Yanan Lin, ... Zhenshan Zhao, in 5G NR and Enhancements, 2022

5.1.2.2 Increase granularity of frequency-domain resource allocation

The granularity size is determined mainly by the signaling overhead and the complexity of resource allocation. A large granularity can be used to alleviate a significant increase in the number of resource granules caused by the increase of bandwidth. On one hand, the larger Subcarrier Spacing (SCS) adopted in 5G NR increases the absolute frequency-domain size of one PRB. For example, a 5G NR PRB is composed of 12 subcarriers, as in LTE, but with 30 kHz SCS the size of a PRB in the frequency domain is 360 kHz, double the size in LTE. On the other hand, 5G NR can use a larger Resource Block Group (RBG) to schedule more PRBs at once, as will be described in Section 5.2.3.

It should be noted here that the concept of a PRB in the 5G NR specification is different from LTE. In the LTE specification, a PRB is a two-dimensional time–frequency concept. One PRB refers to a rectangular resource block of 12 subcarriers × 7 symbols (assuming normal Cyclic Prefix (CP)) and includes 84 Resource Elements (REs), where 1 RE equals 1 subcarrier × 1 symbol. To describe the number of time–frequency resources in LTE, the number of PRBs is sufficient. But in the NR specification, due to the introduction of the symbol-based flexible time-domain resource indication method (see Section 5.2.5), it is not appropriate to define a single resource allocation unit that spans both the frequency domain and the time domain. A PRB in the NR specification, which contains 12 subcarriers, is purely a frequency-domain concept and does not have any time-domain meaning. Time–frequency resource scheduling therefore cannot be described by PRBs alone, but needs to use “the number of PRBs + the number of slots/symbols.” This is a big conceptual difference between 5G NR and LTE, and it demands attention.
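The PRB arithmetic is easy to check; a minimal sketch (the SCS values listed are the standard NR numerologies, included here purely for illustration):

// Frequency-domain width of one NR PRB (12 subcarriers) for each
// subcarrier spacing: 15 kHz (as in LTE), 30, 60, and 120 kHz.
public class PrbWidth {
    public static void main(String[] args) {
        int[] scsKhz = {15, 30, 60, 120};
        for (int scs : scsKhz) {
            System.out.printf("SCS %3d kHz -> PRB width %4d kHz%n", scs, 12 * scs);
        }
    }
}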


URL: https://www.sciencedirect.com/science/article/pii/B9780323910606000052

Schema Assembly and Reuse

James Bean, in SOA and Web Services Interface Design, 2010

12.1.2 Interface schema granularity

The granularity of a schema declaration can influence the degree to which a schema will be reused. Schemas that represent common enterprise semantics or context are typically subject to a higher potential for reuse. Similarly, fine-grained schema declarations such as data types (simpleType) and element definitions tend to experience a higher level of reuse (see Figure 12.6). That is, a data type can be applied to many different elements of a service interface and across different services. A schema-defined structure (complexType) also presents an opportunity for reuse. However, as a coarse-grained structure, there may be fewer potential reuse opportunities.


Figure 12.6. Reuse Granularity

Combining reusable declarations of varying granularity can be very effective within a single and well-defined context. An enterprise standard schema for a Postal Address might include combinations of a complexType structure, along with globally declared elements and simpleTypes. Developing a schema to define a Postal Address structure and also including individual elements for City, State, Postal Code, and Country can be highly effective. However, a schema that includes declarations of mixed granularity and inconsistent scope is less likely to be reused.

Another consideration is the amount of overhead that comes with potential reuse. A schema that is designed to include a moderate number of elements and data type definitions (simpleType) will have a higher potential for reuse than a schema with an extremely high number of element and data type declarations. If the reusable schema includes a vast number of declarations, the interface designer for a new service might determine that the schema is too heavyweight for reuse. Although subjective, the determination of “heavyweight” reflects the number of declarations and the degree to which an XML Schema parser can or cannot optimize away the declarations that are unnecessary (e.g., as potential run-time validation latency).

An enterprise standard schema that represents a common code set is one of the best models to consider (see Figure 12.7). An enterprise standard schema that includes an element and a separate data type (simpleType) for the North American Region, with enumeration values of “USA”, “Canada”, and “Mexico”, is moderate to fine-grained, lightweight, easy to reference, and does not include a high number of unrelated declarations that might remain unused. Once the schema is referenced from the overall interface schema, the internal simpleType would be inherited as the referenced “type” value for element or attribute declarations. The interface designer has the option to also inherit the “North American Country” (<NorthAmericanCountry/>) element, which references the same data type.


Figure 12.7. Example Reusable Enterprise Schema
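Figure 12.7 is not reproduced here; a sketch of what such a reusable enterprise schema might look like, written in the same style as the examples below (the type name NorthAmericanCountryType is an assumption), is:

<?xml version="1.0" encoding="UTF-8" ?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- Global element inheriting the enumerated code-set type -->
  <xs:element name="NorthAmericanCountry" type="NorthAmericanCountryType" />
  <!-- Fine-grained, reusable code set for the North American Region -->
  <xs:simpleType name="NorthAmericanCountryType">
    <xs:restriction base="xs:string">
      <xs:enumeration value="USA" />
      <xs:enumeration value="Canada" />
      <xs:enumeration value="Mexico" />
    </xs:restriction>
  </xs:simpleType>
</xs:schema>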

With this example, the granularity of the reusable North American Country schema is mixed: it contains both a global simpleType with enumerations and a separate global element declaration, both of which are available to other referencing schemas as needed. The scope and context of the schema are also consistent (specific to a North American Country element and values), which simplifies later potential for reuse. If the schema were also to include declarations for nonlocation or nongeographic elements such as “Item ID” and “Account Balance”, even though they are also standard and reusable elements, they are of completely different contexts and semantics from those of location and geography. This kind of inconsistent scope and context complicates the schema and would most likely add element declarations that are “overhead” for potential reusers.

Another example of potential reuse might be an enterprise standard Postal Address structure (see Example 12.1), represented as a complexType declaration. The nested child elements of the address structure (Address Line, City, State, Postal Code, etc.) are also defined in the same schema as separately declared global elements. All of the schema declarations are of the same scope and context (a postal address). However, reusers of the schema have the ability to choose which of the declarations can be used by their interface. The elements can be referenced and reused separate from the structure if needed.

Example 12.1 Example Reusable Postal Address Schema

<?xml version="1.0" encoding="UTF-8" ?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="PostalAddress" type="PostalAddressType" />
  <xs:complexType name="PostalAddressType">
    <xs:sequence>
      <xs:element ref="AddressTo" minOccurs="1" maxOccurs="1" />
      <xs:element ref="AddressLine" minOccurs="1" maxOccurs="4" />
      <xs:element ref="City" minOccurs="1" maxOccurs="1" />
      <xs:element ref="State" minOccurs="1" maxOccurs="1" />
      <xs:element ref="County" minOccurs="1" maxOccurs="1" />
      <xs:element ref="PostalCode" minOccurs="1" maxOccurs="1" />
      <xs:element ref="Country" minOccurs="1" maxOccurs="1" />
    </xs:sequence>
  </xs:complexType>
  <xs:element name="AddressTo" type="AddressLineType" />
  <xs:element name="AddressLine" type="AddressLineType" />
  <xs:element name="City" />
  <xs:element name="State" />
  <xs:element name="County" />
  <xs:element name="PostalCode" />
  <xs:element name="Country" />
  <xs:simpleType name="AddressLineType">
    <xs:restriction base="xs:string">
      <xs:maxLength value="40" />
    </xs:restriction>
  </xs:simpleType>
</xs:schema>

Another reusable schema example might represent a combination of several well-defined and standard data type declarations (see Example 12.2). While these types might not be of the same specific context, they are scoped to represent only data types. The example schema does not include structures or elements. This type of schema includes very fine-grained declarations that have broad utility across lines of business, applications, and functional areas. The potential for reuse is high. Alternatively, if the designer had decided to include all possible data types from across the enterprise in this one schema and regardless of context, the potential for reuse by other interface schemas might diminish. That schema could become overly complex and difficult to manage.

Example 12.2 Example Reusable Type Schema

<?xml version="1.0" encoding="UTF-8" ?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:simpleType name="MonetaryAmountType">
    <xs:restriction base="xs:decimal">
      <xs:totalDigits value="11" />
      <xs:fractionDigits value="2" />
    </xs:restriction>
  </xs:simpleType>
  <xs:simpleType name="CreditAmountType">
    <xs:restriction base="xs:decimal">
      <xs:totalDigits value="11" />
      <xs:fractionDigits value="2" />
      <xs:minInclusive value="0.00" />
    </xs:restriction>
  </xs:simpleType>
  <xs:simpleType name="DebitAmountType">
    <xs:restriction base="xs:decimal">
      <xs:totalDigits value="11" />
      <xs:fractionDigits value="2" />
      <xs:maxInclusive value="0.00" />
    </xs:restriction>
  </xs:simpleType>
</xs:schema>

While there is no single rule or pattern that applies across all potential reuse opportunities, granularity plays a significant role. If the interface schema is designed to represent an abstract set of XML Schema declarations whose intended reuse is not obvious, other service interface designers will avoid using it. Alternatively, if the schema appears to be overly coarse-grained, other interface designers will view it as heavyweight and avoid using it. The balance is a subjective but important consideration. Fine-grained schema declarations tend to exhibit a higher degree of potential reuse, measured as the number of reuse instances. Consistency of scope and context for a reusable schema also plays a role. Where the schema declarations have an inconsistent scope, or where they cross over different information contexts, the complexity of using the schema increases. The result is a heavyweight schema that is so complex that it becomes cumbersome to reuse.


URL: https://www.sciencedirect.com/science/article/pii/B9780123748911000125

Which of the following represent the information levels in an organization?

Individual, department, and enterprise.

Which of the following represent the information formats in an organization?

Document, presentation, and database.

Which of the following represents the different information granularities?

The different information granularities are detail, summary, and aggregate.

Which of the following represent the five common characteristics of high-quality information?

Accuracy: the information is correct.
Completeness: how comprehensive the information is.
Reliability.
Relevance.
Timeliness.

Which of the following are examples of transactional information?

Transactional data describe an internal or external event or transaction that takes place as an organization conducts its business. Examples include sales orders, invoices, purchase orders, shipping documents, passport applications, credit card payments, and insurance claims.

Which of the following is an example of analytical information?

Examples of analytical information are trends (information about where a particular market is heading and whether the organization should follow the trend) and sales (information about whether the organization needs to pick up sales in a particular area or should cut back on inventory of specific products).

Which of the following types of information can be found in a database: inventory, transactions, employees, or all of the above?

All of the above. A database maintains information on inventory, transactions, and employees.