Category
Function
Displays an image or renders a scene and displays an image.
Syntax
where = Display(object, camera, where, throttle);
Inputs
Name | Type | Default | Description |
---|---|---|---|
object | object | none | object to render or image to display |
camera | camera | no default | camera if rendering is required |
where | window or string | the user's terminal | host and window for display |
throttle | scalar | 0 | minimum time between image frames (in seconds) |
Outputs
Name | Type | Description |
---|---|---|
where | window | window identifier for Display window |
Functional Details
object
is the object to be displayed, or to be rendered and displayed.
camera
is the camera to be used to render object. If camera is not specified, the system assumes that object is an image to be displayed (e.g., the output of the Render module). Note: A transformed camera cannot be used for this parameter.
where
specifies the host and window for displaying the image. On a workstation, the parameter string takes the form "X, host:display, window title" (for example, "X, localhost:0, view from front", as in the script examples below). If you are using SuperviseState or SuperviseWindow to control user interactions in the Display window, then where should be set with the where output of SuperviseWindow. Note: If you are using the where parameter, it is important to set its value before the first execution of Display.
throttle
specifies a minimum interval between successive image displays. The default is 0 (no delay).
where (output)
is the window identifier for the Display window. It can be used, for example, by ReadImageWindow to retrieve the pixels of an image after Display has run.
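In the script language, for example, the output can be captured and passed to ReadImageWindow (a minimal sketch; image is assumed to be an existing image object):
displaywindow = Display(image);                  // display an existing image
snapshot = ReadImageWindow(displaywindow);       // read the displayed pixels back from the window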
Notes:
If you are using Display without a camera to simply display an image, you can increase or decrease the resolution of the image by using Refine or Reduce, respectively, on the image before passing it to Display (see Refine and Reduce).
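For example, a sketch of reducing an image before display (this assumes Reduce's second parameter is the reduction factor):
smaller = Reduce(image, 2);      // assumed: reduce the image resolution by a factor of 2
Display(smaller);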
The Arrange module can be used before Display to lay out images side by side, or one above the other (see Arrange).
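A sketch of that layout step, assuming two rendered images image1 and image2 and that Arrange's second parameter gives the number of images per row:
group = Collect(image1, image2);     // gather the two images into a group
canvas = Arrange(group, 2);          // assumed: lay out two images per row (side by side)
Display(canvas);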
If you specify the delayed parameter as 1 to any of the coloring modules, they will automatically store a copy of the "data" component as the "colors" or "opacities" component and attach a "color map" or "opacity map" component containing 256 RGB colors or 256 opacities. If you already have a color or opacity map, either imported or created using the Colormap Editor, and wish to use delayed colors or delayed opacities, you can pass your color map or opacity map to the Color module as the color or opacity parameter, and set the delayed parameter of Color to 1.
The structure of a color map or opacity map is described in Color. The Colormap Editor produces as its two outputs well-formed color maps and opacity maps. Alternatively, if you already have a simple list of 3-vectors or list of scalar values, and want to create a color map or opacity map, you can do this using Construct. The first parameter to Construct should be [0], the second should be [1], and the third should be 256. This will create a "positions" component with positions from 0 to 255. The last parameter to Construct should be your list of 256 colors or opacities.
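Following that recipe in the script language (mycolors is a hypothetical list of 256 RGB 3-vectors):
// builds a "positions" component running from 0 to 255 and attaches the 256 data values
colormap = Construct([0], [1], 256, mycolors);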
If you are reading a stored image using ReadImage, and the image is stored with a colormap, you can specify that the image should be stored internally in Data Explorer with delayed colors by using the delayed parameter to ReadImage.
You can also convert an image (or object) to a delayed colors version by using QuantizeImage.
If you are using delayed colors (see "Delayed Colors and Opacities (Color and Opacity Lookup Tables)" and ReadImage) and displaying images directly (i.e. you are not providing a camera), Display will use the provided color map directly instead of dithering the image. (Depending on your X server, you may need to use the mouse to select the Image or Display window in order for the correct color to appear.) If you do not want Display to use the color map directly, use the Options module to set a "direct color map" attribute with a value of 0 (zero).
Attribute Name | Value | Description |
---|---|---|
direct color map | 0 or 1 | whether or not to use a direct color map |
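For example, to force dithering rather than direct use of the color map (a sketch; image is assumed to be a delayed-color image):
image = Options(image, "direct color map", 0);   // 0 = do not use the color map directly
Display(image);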
When displaying non-delayed color images in 8-bit windows, Display assumes that it can specify 225 individual colors. If this number is not currently available in the shared color map, Display will find the best approximations available, which may visibly degrade image quality. Alternatively, Display may use a private color map; this decision is based on the worst-case approximation it would have to make with the default color map, and if that approximation exceeds a threshold, a private color map is used. Approximation quality is measured as the Euclidean distance, in an RGB unit cube, between the desired color and the best available approximation for it.
An environment variable, DX8BITCMAP, sets the level at which the change to using a private color map is made. The value of DX8BITCMAP should be a number between 0 (zero) and 1 (one), and it represents the Euclidean distance in RGB color space, normalized to 1, for the maximum allowed discrepancy. If you set DX8BITCMAP to 1, then a private color map will never be used. On the other hand, if you set DX8BITCMAP to -1, then a private color map will always be used. The default is 0.1. See also the -8bitcmap command line option for Data Explorer in Table 5 in IBM Visualization Data Explorer User's Guide.
Displayed images generated by Display or Image are gamma corrected. Gamma correction adjusts for the fact that for many display devices a doubling of the digital value of an image's brightness does not necessarily produce a doubling of the actual screen brightness. Thus, before displaying to the screen, the pixel values are adjusted non-linearly to produce a more accurate appearance.
The environment variables DXGAMMA_8BIT, DXGAMMA_12BIT, and DXGAMMA_24BIT are used to specify values for gamma of 8-, 12-, and 24-bit windows, respectively. If the appropriate DXGAMMA_nBIT environment variable is not set, the value of the environment variable DXGAMMA will be used if one is defined. Otherwise, the module uses the system default, which depends on the machine architecture and window depth. This default is always 2 (two) except for 8-bit sgi windows, for which it is 1 (one). Note that the default depends on the machine on which the software renderer is running, not on the machine that displays the image.
If you wish to render a displayed image at a higher resolution (for example to write to an output file), you can usually simply use Render on the same object as object, with a new camera (see AutoCamera or Camera). However, if object contains screen objects (captions and color bars), the new image will not be WYSIWYG (What You See Is What You Get), with respect to the displayed image, because the sizes of captions and color bars are specified in pixels rather than in screen-relative units. The ScaleScreen module (see ScaleScreen) allows you to modify the size of screen objects before rendering.
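For example, a sketch of re-rendering at a higher resolution and writing the result to a file (the resolution, file name, and format are illustrative):
hirescamera = AutoCamera(object, "front", resolution=1024);
hiresimage = Render(object, hirescamera);
WriteImage(hiresimage, "snapshot", "tiff");      // hypothetical output file name and format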
When given a camera input, the Display module (or Image tool) caches rendered images by default. The result is faster redisplay if the same object and camera are later passed to the module.
To turn off this automatic caching, use the Options module to attach a "cache" attribute (set to 0) to object.
It is important to remember that this caching is separate from the caching of module outputs, which is controlled by the -cache command-line option to dx.
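For example, a sketch that disables image caching for a particular object (object and camera are assumed to exist):
object = Options(object, "cache", 0);    // do not cache the rendered image
Display(object, camera);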
You can change the rendering properties of an object by using the Options module. The following table lists the shading attributes that can be set by the Options module for interpretation by the Display tool. (See the section on surface shading in IBM Visualization Data Explorer Programmer's Reference for more information.)
Attribute | Type | Default | Description |
---|---|---|---|
"ambient" | scalar | 1 | coefficient of ambient light ka |
"diffuse" | scalar | .7 | coefficient of diffuse reflection kd |
"specular" | scalar | .5 | coefficient of specular reflection ks |
"shininess" | integer | 10 | exponent of specular reflection sp |
As a rule of thumb, except for purposes of special effects, ka should be 1 and kd + ks should be about 1. The larger ks, the brighter the highlight, and the larger sp, the sharper the highlight. The Shade module provides a shortcut for setting rendering properties.
The attributes listed above apply to both the front and back of an object. In addition, for each attribute "x" there is also a "front x" and a "back x" attribute that applies only to the front and back of the surface, respectively. So, for example, to disable specular reflections from the back surfaces of an object, use the Options module to set the "back specular" attribute of the object to 0.
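For example, a sketch that sharpens the highlight on a surface and disables specular reflection from its back faces (the attribute values are illustrative):
surface = Options(surface, "specular", 0.5, "shininess", 30, "back specular", 0);
Display(surface, camera);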
The determination of which faces are "front" and which are "back" depends on the way in which the "connections" component of the faces is defined. "Front colors" applies to clockwise faces, and "back colors" applies to counterclockwise faces.
The volume renderer interprets colors and opacities as values per unit distance. Thus the amount of color and degree of attenuation seen in an image object is determined in part by the extent of the object's volume. The Color, AutoColor, and AutoGrayScale modules attach "color multiplier" and "opacity multiplier" attributes to the object so that colors and opacities will be appropriate to the volume, while maintaining "color" and "opacity" components that range from 0 to 1 (so that objects derived from the colored volume, such as glyphs and boundaries, are colored correctly). See "Rendering Model" in IBM Visualization Data Explorer Programmer's Reference.
These attributes adjust the colors and opacities to values that should be "appropriate" for the object being colored. However, if the simple heuristics used by these modules to compute the attribute values are not producing the desired colors and opacities, you have two alternatives: set the "color multiplier" and "opacity multiplier" attributes explicitly with the Options module (see the table below), or remove the attributes so that the "color" and "opacity" components themselves are interpreted as values per unit distance.
Only the first of these methods should be used for "delayed" colors.
Finally, if you color a group of volumes and the resulting image is black, the reason is that the current renderer does not support coincident volumes.
Attribute | Type | Description |
---|---|---|
color multiplier | scalar | Multiplies values in the "color" component |
opacity multiplier | scalar | Multiplies values in the "opacity" component |
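For example, a sketch that sets both attributes explicitly (the value 10 is illustrative):
volume = Options(volume, "color multiplier", 10, "opacity multiplier", 10);
Display(volume, camera);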
Objects are shaded when rendered only if a "normals" component is present. Many modules (e.g., Isosurface) automatically add "normals", but the FaceNormals, Normals, and Shade modules can also be used to add normals. Even if an object has "normals", shading can be disabled by adding a "shade" attribute with a value of 0 (the Shade module can do this).
Attribute Name | Values | Description |
---|---|---|
shade | 0 or 1 | used to specify whether or not to shade when normals are present |
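For example, to leave an object unshaded even though it has a "normals" component (a sketch):
isosurface = Options(isosurface, "shade", 0);    // do not shade, even though "normals" are present
Display(isosurface, camera);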
Object fuzz is a method of resolving conflicts between objects at the same distance from the camera. For example, it may be desirable to define a set of lines coincident with a plane. Normally it will be unclear which object is to be displayed in front. In addition, single-pixel lines are inherently inaccurate (i.e. they deviate from the actual geometric line) by as much as one-half pixel; when displayed against a sloping surface, this x or y inaccuracy is equivalent to a z inaccuracy related to the slope of the surface. The "fuzz" attribute specifies a z value that will be added to the object before it is compared with other objects in the scene, thus resolving this problem. The fuzz value is specified in pixels. For example, a fuzz value of one pixel can compensate for the described half-pixel inaccuracy when the line is displayed against a surface with a slope of two.
Attribute | Type | Description |
---|---|---|
fuzz | scalar | object fuzz |
To add fuzz to an object, pass the object through the Options module, specifying the attribute as fuzz and the value of the attribute as the number of pixels (typically a small integer).
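For example, a sketch that lifts a set of lines in front of a coincident surface:
lines = Options(lines, "fuzz", 1);       // add one pixel of z fuzz to the lines
scene = Collect(lines, surface);
Display(scene, camera);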
The hardware renderer can anti-alias lines or draw lines more than one pixel wide; these options are not available in software rendering. To anti-alias lines, use the Options module to set an "antialias" attribute with the value "lines" on the object passed to Display. To draw wider lines, use the Options module to set a "line width" attribute whose value is the desired width in pixels.
Attribute | Values | Description |
---|---|---|
antialias | "lines" | causes lines to be anti-aliased |
line width | n | causes lines to be drawn with a width of n pixels |
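For example, a sketch that combines both attributes under hardware rendering (a width of 3 pixels is illustrative):
lines = Options(lines, "antialias", "lines", "line width", 3);
lines = Options(lines, "rendering mode", "hardware");
Display(lines, camera);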
Data Explorer provides access to the hardware accelerators on the workstation, in addition to the default software rendering techniques. The hardware enhancements are available only on workstations that are equipped with 3-D graphic adapters. On systems without such adapters, only the software rendering options are available. This enhancement is intended to provide increased interactivity, especially in operations that involve only the rendering process.
Data Explorer can also provide accelerated rendering by approximating the rendering using points, lines, and opaque surfaces. Such geometric elements are often sufficient to approximate the appearance of the desired image, and thus are useful for preliminary visualizations of the data.
The approximations fall into three main categories: bounding box, dots, and wireframe. Wireframe is available only as a hardware rendering technique.
If you are using the graphical user interface and the Image tool, you can access the rendering options by using the Rendering Options option on the Options pull-down menu in the Image window. This option invokes a dialog box that allows you to set the rendering approximations for continuous and one-time execution. (For more information, see "Rendering Options..." in IBM Visualization Data Explorer User's Guide.)
If you are not using the Image tool, then you must use the Options module to set various attributes that control the rendering approximations. The following table lists the attributes that control rendering approximations, together with the permissible values for each attribute.
Attribute Name | Values | Description |
---|---|---|
"rendering mode" | "software" "hardware" |
use software rendering use hardware rendering |
"rendering approximation" | "none" "box" "dots" "wireframe" | complete rendering object bounding box only dot approximation to object wireframe approximation to object |
"render every" | n | render every nth primitive render everything (default) |
Note: If you do not pass a camera to Display (i.e., if object is already an image), Display will always use software to display the image, regardless of the setting of any rendering options using the Options tool.
If the machine on which Data Explorer is running supports OpenGL or GL, then texture mapping is available using hardware rendering. Texture mapping is the process of mapping an image (a field with 2-dimensional positions, quad connections, and colors) onto a geometry field with 2-dimensional connections and, typically, 3-dimensional positions (e.g., a color image mapped onto a rubbersheeted height field). The advantage of texture mapping over the use of Map, for example, is that the resulting image may have much greater resolution than the height map.
The geometry field must have 2-D connections (triangles or quads) and must also have a component, with the name "uv," that is dependent on positions and provides the mapping between the image and the positions of the geometry field. This component consists of 2-vectors. The origin of the image will be mapped to the uv value [0 0], and the opposite corner to the uv value [1 1].
The texture map should be attached to the geometry field as an attribute, with the attribute name "texture", which can be done with the Options module. A texture-mapped image can be retrieved from the Display window using ReadImageWindow and then written to a file using WriteImage.
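For example, a sketch of texture mapping an image onto a surface (the file name is hypothetical, and surface is assumed to already carry a position-dependent "uv" component):
texture = ReadImage("label.tiff");                         // hypothetical image file
surface = Options(surface, "texture", texture);            // attach the texture map as an attribute
surface = Options(surface, "rendering mode", "hardware");  // texture mapping requires hardware rendering
Display(surface, camera);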
Translucent textures are represented as image fields with opacities components: either float opacities, or ubyte opacities and a float opacity map. As with other objects that have opacities, translucent textured objects are not meshed; they are placed in the translucent sort list, sorted by depth, and rendered. Translucent textures can be imported into Data Explorer with the ReadImage module, provided ImageMagick support was included, using image formats that support opacity masks.
Mipmapping is the process of creating a set of filtered texture maps of decreasing resolution, generated from a single high-resolution texture, to improve accuracy during texture mapping. This filtering allows OpenGL to apply an appropriate level of detail to an object depending on the object's viewable size, reducing aliasing and flickering. The filter to apply to a texture can be set with the "texture min filter" and "texture mag filter" attributes.
Attribute Name | Value | Description |
---|---|---|
texture | a texture map | specifies a texture map |
texture wrap s | "clamp to border", "clamp", "repeat", or "clamp to edge" | specifies how to apply the texture in the texture's s (horizontal) direction |
texture wrap t | "clamp to border", "clamp", "repeat", or "clamp to edge" | specifies how to apply the texture in the texture's t (vertical) direction |
cull face | "off", "front", "back", or "front and back" | specifies which polygon faces should not be drawn; culling turns off the drawing of the specified faces and can increase rendering speed |
light model | "one side" or "two side" | two-sided lighting computes lighting for both the inside and the outside of polygons. It is particularly useful when polygons have no normals, or when the automatically computed normals bear no resemblance to the outside of the rendered object |
texture min filter | "nearest", "linear", "nearest_mipmap_nearest", "nearest_mipmap_linear", "linear_mipmap_nearest", or "linear_mipmap_linear" | specifies the filter used to generate the set of mipmapped textures applied when the texture is rendered smaller than its actual size |
texture mag filter | "nearest" or "linear" | specifies the filter used when the texture is rendered larger than its actual size |
texture function | "decal", "replace", "modulate", or "blend" | specifies the texture mode. In decal mode with a three-component (RGB) texture, the texture's colors replace the object's colors; use decal mode to apply an opaque texture to an object (a soup can with an opaque label, for example). With modulate, blend, or a four-component texture, the final color is a combination of the texture's and the object's colors: with modulate, the object's color is modulated by the contents of the texture map, which is needed for a texture that responds to lighting conditions. Blend mode makes sense only for one-component (A) or two-component (LA) textures. The replace function substitutes the incoming texture color for the object's color. |
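For example, a sketch that requests trilinear mipmap filtering for minification (attribute values taken from the table above):
surface = Options(surface, "texture min filter", "linear_mipmap_linear", "texture mag filter", "linear");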
Components
The object input must have a "colors," "front colors," or "back colors" component.
Script Language Examples
electrondensity = Import("/usr/local/dx/samples/data/watermolecule");
isosurface = Isosurface(electrondensity, 0.3);
camera1 = AutoCamera(isosurface, "front", resolution=300);
camera2 = AutoCamera(isosurface, "top", resolution=300);
image1 = Render(isosurface, camera1);
image2 = Render(isosurface, camera2);
Display(image1, where="X, localhost:0, view from front");
Display(image2, where="X, localhost:0, view from top");
electrondensity = Import("/usr/local/dx/samples/data/watermolecule");
isosurface = Isosurface(electrondensity, 0.3);
from = Direction(65, 5, 10);
camera = AutoCamera(isosurface, from);
isosurface = Options(isosurface, "rendering mode", "hardware", "rendering approximation", "dots");
Display(isosurface, camera);
Example Visual Programs
MovingCamera.net
PlotLine.net
PlotTwoLines.net
ReadImage.net
ScaleScreen.net
TextureMapOpenGL.net
UsingCompute.net
UsingMorph.net
See Also
Arrange, Collect, Filter, Image, Render, Reduce, Refine, ScaleScreen, Normals, FaceNormals, SuperviseWindow, SuperviseState, ReadImageWindow, Options