Computer Graphics | Preboard Solution 2019

5th Semester

Group B

Morgan

 

Short Question

Write a program to draw a circle with radius 200 and center (500, 700) using the midpoint circle algorithm.
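A minimal Java sketch of the midpoint circle algorithm for this data (radius 200, center (500, 700)). The 1200 x 1200 image size and the output file name circle.png are assumptions made for the example; pixels are written to a PNG file so no graphics window is needed.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class MidpointCircle {
    static BufferedImage img = new BufferedImage(1200, 1200, BufferedImage.TYPE_INT_RGB);

    // Plot a pixel and its seven symmetric counterparts around the center.
    static void plotOctants(int xc, int yc, int x, int y) {
        int white = 0xFFFFFF;
        img.setRGB(xc + x, yc + y, white); img.setRGB(xc - x, yc + y, white);
        img.setRGB(xc + x, yc - y, white); img.setRGB(xc - x, yc - y, white);
        img.setRGB(xc + y, yc + x, white); img.setRGB(xc - y, yc + x, white);
        img.setRGB(xc + y, yc - x, white); img.setRGB(xc - y, yc - x, white);
    }

    public static void main(String[] args) throws Exception {
        int xc = 500, yc = 700, r = 200;
        int x = 0, y = r;
        int p = 1 - r;                       // initial decision parameter
        plotOctants(xc, yc, x, y);
        while (x < y) {                      // generate the points of one octant
            x++;
            if (p < 0) {
                p += 2 * x + 1;              // midpoint inside the circle: keep y
            } else {
                y--;
                p += 2 * (x - y) + 1;        // midpoint outside: step y down
            }
            plotOctants(xc, yc, x, y);
        }
        ImageIO.write(img, "png", new File("circle.png"));
    }
}
```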













Comprehensive Questions

Define intensity attenuation and ambient light. Explain the procedure of Phong shading method for polygon rendering with its algorithm.
Intensity attenuation is the decrease in light intensity with distance from the light source: a surface close to the source receives a higher incident intensity than a distant surface, so the incident intensity is usually scaled by a radial attenuation function of the distance d, commonly f(d) = 1 / (a0 + a1·d + a2·d²).
Ambient light is the light that is already present in a scene before any additional lighting is added. It usually refers to natural light, either outdoors or coming through windows, but it can also mean artificial light such as normal room lighting.





Phong shading is a more accurate method for rendering a polygon surface in which the normal vectors are interpolated and an illumination model is then applied to each surface point. It was developed by Phong Bui Tuong and is also known as normal-vector interpolation shading. The idea is to interpolate the normal vector instead of the light intensities and then apply the illumination model at each surface point. The following steps are carried out to render the polygon surface:
Ø Determine the average unit normal vector at each polygon vertex.
Ø Linearly interpolate the vertex normal over the surface of the polygon
Ø Apply an illumination model along each scan line to calculate projected pixel intensities for the surface points.
Algorithm:
Step 1: Determine the average unit normal vector at each polygon vertex.
Ø At each polygon vertex, we obtain a normal vector by averaging the surface normals of all the polygons sharing that vertex.
Ø Therefore the unit normal vector at vertex V is given by:
N_V = (N_1 + N_2 + ... + N_n) / |N_1 + N_2 + ... + N_n|
Step 2: Linearly interpolate the vertex normals over the surface of the polygon. For example, along a polygon edge between vertices 1 and 2, the normal at scan-line position y is obtained as
N = ((y - y_2) / (y_1 - y_2)) N_1 + ((y_1 - y) / (y_1 - y_2)) N_2
Step 3: Apply the illumination model along each scan line to calculate the pixel intensities for the surface points.
Step 4: End
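A small illustrative Java sketch of the idea behind these steps: vertex normals are interpolated, re-normalized, and a simple illumination model is applied at each interpolated point. The vertex normals, light direction and the coefficients ka and kd are invented example values, not taken from the question.

```java
public class PhongShadingSketch {
    static double[] normalize(double[] v) {
        double len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return new double[] { v[0]/len, v[1]/len, v[2]/len };
    }
    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // Interpolate two vertex normals with weight t (0..1), as done along an edge
    // or a scan line, then re-normalize the result.
    static double[] interpolateNormal(double[] n1, double[] n2, double t) {
        return normalize(new double[] {
            (1 - t) * n1[0] + t * n2[0],
            (1 - t) * n1[1] + t * n2[1],
            (1 - t) * n1[2] + t * n2[2] });
    }

    public static void main(String[] args) {
        double[] nA = normalize(new double[] { 0, 0, 1 });   // normal at vertex A
        double[] nB = normalize(new double[] { 1, 0, 1 });   // normal at vertex B
        double[] L  = normalize(new double[] { 1, 1, 1 });   // direction to the light
        double ka = 0.2, Ia = 1.0, kd = 0.7, Il = 1.0;       // illustrative constants

        for (double t = 0; t <= 1.0; t += 0.25) {
            double[] n = interpolateNormal(nA, nB, t);
            // Simple illumination model: ambient term plus Lambertian diffuse term.
            double intensity = ka * Ia + kd * Il * Math.max(0, dot(n, L));
            System.out.printf("t = %.2f  intensity = %.3f%n", t, intensity);
        }
    }
}
```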
Write a short note on:
RGB color model
Animation sequence design
RGB color model
 





The RGB color model is an additive color model in which red, green and blue light are added together in various ways to reproduce a broad array of colors.
The name of the model comes from the initials of the three additive primary colors, red, green and blue.
The main diagonal of the RGB cube represents the gray levels:
black is (0, 0, 0)
white is (1, 1, 1)
Hue is determined by the one or two largest components.
Saturation can be controlled by varying the collective minimum value of R, G and B.
Luminance can be controlled by varying the magnitudes while keeping the ratios constant.
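A small illustrative Java sketch of the last two points: scaling all three components equally changes the luminance while keeping the hue, and raising the common minimum of R, G and B moves the color toward the gray diagonal. The starting color and the amounts are invented example values.

```java
public class RgbModelDemo {
    public static void main(String[] args) {
        double[] orange = { 1.0, 0.5, 0.0 };          // some color in [0,1]^3

        // Halving every component: same ratios, lower luminance, same hue.
        double[] darker = { orange[0] * 0.5, orange[1] * 0.5, orange[2] * 0.5 };

        // Adding the same amount to every component raises the collective minimum,
        // moving the color toward the gray diagonal (less saturated).
        double[] paler = { Math.min(1, orange[0] + 0.3),
                           Math.min(1, orange[1] + 0.3),
                           Math.min(1, orange[2] + 0.3) };

        System.out.printf("darker = (%.2f, %.2f, %.2f)%n", darker[0], darker[1], darker[2]);
        System.out.printf("paler  = (%.2f, %.2f, %.2f)%n", paler[0],  paler[1],  paler[2]);
    }
}
```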
 
Animation sequence design
The steps of animation design are:
1. Storyboard layout
2. Object definition
3. Key frame specification
4. Generation of in-between frames





Storyboard layout: The storyboard is an outline of the action. It defines a motion sequence as a set of basic events that are to take place. Depending on the type of animation to be produced, the storyboard could consist of a set of rough sketches or a list of the basic ideas for the motion.
 
Object definition: An object definition is given for each participant in the action. Objects can be defined in terms of basic shapes such as polygons or splines. In addition, the associated movements for each object are specified along with the shape.
 
Key frames: A key frame is a detailed drawing of the scene at a certain time in the animation sequence. In animation and filmmaking, a key frame is the drawing that defines the starting and ending point of any smooth transition. Within each key frame, each object is positioned according to the time for that frame. Some key frames are chosen at extreme positions in the action; others are spaced so that the time interval between key frames is not too great. The position of the key frames on the film defines the timing of the movement. Key frames are the important frames during which an object changes its size, direction, shape or other properties.
 





In-betweens: In-betweens are the intermediate frames between the key frames. The number of in-betweens needed is determined by the medium used to display the animation. Film requires 24 frames per second, and graphics terminals are refreshed at a rate of 30 to 60 frames per second. Typically, the time intervals for the motion are set up so that there are from 3 to 5 in-betweens for each pair of key frames. Depending upon the speed specified for the motion, some key frames can be duplicated.
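A small Java sketch of generating in-betweens by linear interpolation between two key frames. The key-frame positions and the choice of 3 in-betweens are invented example values.

```java
public class InBetweens {
    public static void main(String[] args) {
        double[] key1 = { 100, 200 };   // object position in key frame 1
        double[] key2 = { 400, 250 };   // object position in key frame 2
        int inBetweens = 3;             // typically 3 to 5 in-betweens per key-frame pair

        for (int i = 1; i <= inBetweens; i++) {
            double t = (double) i / (inBetweens + 1);   // fraction of the way between frames
            double x = key1[0] + t * (key2[0] - key1[0]);
            double y = key1[1] + t * (key2[1] - key1[1]);
            System.out.printf("in-between %d: (%.1f, %.1f)%n", i, x, y);
        }
    }
}
```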
 

KIST

SET B

Set A





IMS

Group B

Digitize the line with endpoints A (1, 9) and B (6, 1) using the DDA line drawing algorithm. Show all necessary steps.
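A short Java sketch of the DDA computation for these endpoints; it prints the rounded pixel positions step by step instead of drawing them.

```java
public class DdaLine {
    public static void main(String[] args) {
        double x1 = 1, y1 = 9, x2 = 6, y2 = 1;

        double dx = x2 - x1, dy = y2 - y1;
        int steps = (int) Math.max(Math.abs(dx), Math.abs(dy));  // 8 steps for this line
        double xInc = dx / steps, yInc = dy / steps;             // 0.625 and -1.0

        double x = x1, y = y1;
        for (int k = 0; k <= steps; k++) {
            // Round the current position to the nearest pixel and "plot" it.
            System.out.printf("k = %d  plot (%d, %d)%n", k, Math.round(x), Math.round(y));
            x += xInc;
            y += yInc;
        }
    }
}
```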





 
Describe how CRT works with internal diagram.
Electrons generated by the electron gun are filtered by the control grid. An electron beam passing the control grid is directed by the focusing system. The horizontal and vertical deflection plates then steer the electron beam so that it strikes the desired point on the phosphor-coated inner surface of the screen. When the electron beam strikes the phosphor at a point, that point emits light (glows).
How is composite transformation advantageous? Prove that two successive rotations are additive.
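Composite transformation is advantageous because a sequence of transformations can be concatenated into a single matrix, so every point is transformed by one matrix multiplication instead of one per transformation. For the additivity of successive rotations, a standard derivation (using the 2D rotation matrix about the origin) is:

```latex
R(\theta_2)\,R(\theta_1)
= \begin{pmatrix}\cos\theta_2 & -\sin\theta_2\\ \sin\theta_2 & \cos\theta_2\end{pmatrix}
  \begin{pmatrix}\cos\theta_1 & -\sin\theta_1\\ \sin\theta_1 & \cos\theta_1\end{pmatrix}
= \begin{pmatrix}\cos(\theta_1+\theta_2) & -\sin(\theta_1+\theta_2)\\ \sin(\theta_1+\theta_2) & \cos(\theta_1+\theta_2)\end{pmatrix}
= R(\theta_1+\theta_2)
```

where the middle step uses the angle-addition identities cos θ1 cos θ2 − sin θ1 sin θ2 = cos(θ1 + θ2) and sin θ1 cos θ2 + cos θ1 sin θ2 = sin(θ1 + θ2). Hence rotating by θ1 and then by θ2 is the same as a single rotation by θ1 + θ2.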

Compare and contrast Raster Display with Random Display.
The differences between raster scan display and random scan display are as follows:

1. Raster scan display has lower resolution because the picture definition is stored as intensity values; random scan display has higher resolution because the picture definition is stored as a set of line-drawing commands.
2. In raster scan the electron beam sweeps the screen from top to bottom, one row at a time; in random scan the beam is directed only to the parts of the screen where the picture is to be drawn, one line at a time, which is why it is also called a vector display.
3. Raster scan display is less expensive; random scan display is costlier.
4. The refresh rate of raster scan is 60 to 80 frames per second; the refresh rate of random scan depends on the number of lines to be displayed, about 30 to 60 per second.
5. Raster scan stores the picture definition in a refresh buffer, also called the frame buffer; random scan stores the picture definition as a set of line commands called the refresh display file.
6. Raster scan supports shadows, advanced shading and hidden-surface techniques, so it gives a realistic display of scenes; random scan does not, so it cannot give a realistic display of scenes.
7. Raster scan uses pixels along scan lines to draw an image; random scan is designed for line-drawing applications and uses various mathematical functions to draw.

 





Group C

You are provided with the clipping rectangle with coordinates A(10, 10), B(10, 20), C(20, 10) and D(20, 20). Clip the given line PQ with coordinates P(-10, -20) and Q(10, 10) using the Cohen-Sutherland line clipping algorithm.
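A compact Java sketch of Cohen-Sutherland clipping for this window (x and y both in [10, 20]) and the segment P(-10, -20) to Q(10, 10); the usual 4-bit TOP/BOTTOM/RIGHT/LEFT region codes are used.

```java
public class CohenSutherland {
    static final double XMIN = 10, YMIN = 10, XMAX = 20, YMAX = 20;
    static final int LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8;

    // 4-bit region code of a point relative to the clipping window.
    static int outCode(double x, double y) {
        int code = 0;
        if (x < XMIN) code |= LEFT;   else if (x > XMAX) code |= RIGHT;
        if (y < YMIN) code |= BOTTOM; else if (y > YMAX) code |= TOP;
        return code;
    }

    public static void main(String[] args) {
        double x1 = -10, y1 = -20, x2 = 10, y2 = 10;
        int c1 = outCode(x1, y1), c2 = outCode(x2, y2);
        boolean accept = false;

        while (true) {
            if ((c1 | c2) == 0) { accept = true; break; }     // trivially accepted
            if ((c1 & c2) != 0) break;                        // trivially rejected
            int cOut = (c1 != 0) ? c1 : c2;                   // pick an outside endpoint
            double x, y;
            if ((cOut & TOP) != 0)         { x = x1 + (x2 - x1) * (YMAX - y1) / (y2 - y1); y = YMAX; }
            else if ((cOut & BOTTOM) != 0) { x = x1 + (x2 - x1) * (YMIN - y1) / (y2 - y1); y = YMIN; }
            else if ((cOut & RIGHT) != 0)  { y = y1 + (y2 - y1) * (XMAX - x1) / (x2 - x1); x = XMAX; }
            else                           { y = y1 + (y2 - y1) * (XMIN - x1) / (x2 - x1); x = XMIN; }
            if (cOut == c1) { x1 = x; y1 = y; c1 = outCode(x1, y1); }
            else            { x2 = x; y2 = y; c2 = outCode(x2, y2); }
        }
        if (accept) System.out.printf("clipped segment: (%.1f, %.1f) to (%.1f, %.1f)%n", x1, y1, x2, y2);
        else        System.out.println("line rejected");
    }
}
```

For this particular segment the visible part reduces to the single corner point (10, 10), since the rest of PQ lies below and to the left of the window.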

Compare parallel projection with perspective projection.
The comparison of parallel and perspective projection is as follows:

1. In parallel projection the center of projection is located at infinity; in perspective projection it is located at a finite point.
2. Parallel projection does not form a realistic picture; perspective projection forms a realistic picture.
3. Parallel projection preserves the relative proportions of an object; perspective projection does not preserve the relative proportions of an object.
4. The projectors in parallel projection are parallel; the projectors in perspective projection are not parallel (they converge at the center of projection).
5. Parallel projection represents the object in a different way, as if seen through a telescope; perspective projection represents the object in a three-dimensional way, as the eye sees it.
6. The lines in parallel projection remain parallel; the lines in perspective projection are not parallel.
7. Parallel projection gives an accurate (measurable) view of the object; perspective projection cannot give an accurate view of the object.

CAB

In the polygon table representation, the surface is specified by a set of vertex coordinates and associated attributes. As shown in the following figure, there are five vertices, v1 to v5.
Each vertex stores its x, y and z coordinate information, which is represented in the vertex table as v1: x1, y1, z1.
The edge table is used to store the edge information of the polygon. In the figure, edge E1 lies between vertices v1 and v2, which is represented in the table as E1: v1, v2.
The polygon surface table stores the surfaces present in the object together with the edges that bound them. In the figure, surface S1 is bounded by edges E1, E2 and E3, which is represented in the polygon surface table as S1: E1, E2, E3.
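A small Java sketch of the three tables described above for an object with vertices v1 to v5; the coordinate values and the exact edge and surface assignments are placeholders, since the referenced figure is not reproduced here.

```java
import java.util.List;
import java.util.Map;

public class PolygonTables {
    public static void main(String[] args) {
        // Vertex table: each vertex stores its x, y, z coordinates (placeholder values).
        Map<String, double[]> vertexTable = Map.of(
            "v1", new double[] { 0, 0, 0 },
            "v2", new double[] { 1, 0, 0 },
            "v3", new double[] { 1, 1, 0 },
            "v4", new double[] { 0, 1, 1 },
            "v5", new double[] { 0, 0, 1 });

        // Edge table: each edge references the two vertices it joins.
        Map<String, List<String>> edgeTable = Map.of(
            "E1", List.of("v1", "v2"),
            "E2", List.of("v2", "v3"),
            "E3", List.of("v3", "v1"),
            "E4", List.of("v3", "v4"),
            "E5", List.of("v4", "v5"),
            "E6", List.of("v5", "v1"));

        // Polygon surface table: each surface references the edges that bound it.
        Map<String, List<String>> surfaceTable = Map.of(
            "S1", List.of("E1", "E2", "E3"),
            "S2", List.of("E3", "E4", "E5", "E6"));

        surfaceTable.forEach((s, edges) -> System.out.println(s + " -> " + edges));
    }
}
```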





 
 
Ambient light is the light that enters a room and bounces around the room multiple times before lighting a particular object. The ambient light contribution depends on the light's ambient color and the material's ambient color.
Diffuse light represents direct light hitting a surface. The diffuse light contribution depends on the incident angle; for example, light hitting a surface at a 90-degree angle contributes more than light hitting the same surface at a 5-degree angle.
Diffuse light depends on the material colors, the light colors, the illuminance, the light direction and the normal vector.
A polygon surface is rendered using Phong shading by carrying out the following steps:
Determine the average unit normal vector at each polygon vertex
Linearly interpolate the vertex normals over the surface of the polygon.
Apply an illumination model along each scan line to calculate projected pixel intensities for the surface points
Visible surface detection methods are used to identify those parts of a scene that are visible from a chosen viewing position. Surfaces which are obscured by other opaque surfaces along the line of sight (projection) are invisible to the viewer.
In the Z-buffer method, each surface is processed separately, one pixel position at a time across the surface. The depth values for a pixel are compared and the closest surface determines the color to be displayed in the frame buffer.
It is applied very efficiently to polygon surfaces, and surfaces can be processed in any order. To distinguish the closer polygons from the farther ones, two buffers, named the frame buffer and the depth buffer, are used.
The depth buffer is used to store the depth value for each (x, y) position as surfaces are processed (0 ≤ depth ≤ 1).
The frame buffer is used to store the intensity (color) value at each (x, y) position.
Algorithm
Step 1: Set the buffer values:
    depthbuffer (x, y) = 0
    framebuffer (x, y) = background color
Step 2: Process each polygon (one at a time):
    For each projected (x, y) pixel position of a polygon, calculate the depth z.
    If z > depthbuffer (x, y), then
        compute the surface color,
        set depthbuffer (x, y) = z,
        framebuffer (x, y) = surfacecolor (x, y)
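A minimal Java sketch of this depth-buffer test, following the same convention as the steps above (buffers initialized to 0 and a larger z treated as closer). The two surfaces and their depth values are invented examples.

```java
public class ZBufferSketch {
    static final int W = 8, H = 8;
    static double[][] depthBuffer = new double[H][W];   // initialized to 0
    static int[][] frameBuffer = new int[H][W];         // 0 = background color

    // Process one surface: for every covered pixel, compare its depth with the
    // stored value and keep the closer (larger z) surface.
    static void processSurface(int color, double[][] z) {
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                if (z[y][x] > depthBuffer[y][x]) {
                    depthBuffer[y][x] = z[y][x];
                    frameBuffer[y][x] = color;
                }
    }

    public static void main(String[] args) {
        double[][] zA = new double[H][W];
        double[][] zB = new double[H][W];
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                zA[y][x] = 0.4;                       // surface A: constant depth
                zB[y][x] = (x < W / 2) ? 0.7 : 0.1;   // surface B: closer on the left half
            }
        processSurface(1, zA);
        processSurface(2, zB);

        for (int y = 0; y < H; y++) {                 // print which surface won each pixel
            StringBuilder row = new StringBuilder();
            for (int x = 0; x < W; x++) row.append(frameBuffer[y][x]);
            System.out.println(row);
        }
    }
}
```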
 

St. Xaviers

Group B

What is projection? Derive the transformation of parallel projection.
Projection in computer graphics means the transformation of a three-dimensional (3D) area into a two-dimensional (2D) area. The most frequently used projections are:
Parallel
Perspective
Parallel Projection: Parallel projection is the transformation of a three-dimensional area onto a plane. In this projection, all projection rays are parallel. It is determined by a projection plane (view plane) and by a projection direction (vector), which cannot be parallel to the plane. According to the projection direction, we can split parallel projection into the following types:
Vertical orthogonal projection
Oblique (slant)
Most frequently used is the orthogonal projection, in which the projection rays are orthogonal to the projection plane. The method of orthogonal projection neglects one of the coordinates.
The orthogonal parallel projection onto the xy plane neglects the z-coordinate: the point P = (x, y, z) corresponds in the projection to the point P' = (x, y, 0). The matrix representation of this transformation (in homogeneous coordinates) is:

[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 0 0 ]
[ 0 0 0 1 ]
We can express the oblique parallel projection of the point (x, y, z) onto the xy plane as follows:
x' = x + z · (a · cos ɵ)
y' = y + z · (a · sin ɵ)
where the parameter a determines the elongation along the z axis and the angle ɵ is the deviation from the x axis. If the parameter a = 0, then it is the case of an orthogonal projection.
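A short Java sketch applying both projections above to a single point (x, y, z) = (2, 3, 4); the oblique parameters a = 0.5 and ɵ = 45° are illustrative values.

```java
public class ParallelProjection {
    public static void main(String[] args) {
        double x = 2, y = 3, z = 4;

        // Orthogonal projection onto the xy plane simply drops the z coordinate.
        System.out.printf("orthogonal: (%.2f, %.2f)%n", x, y);

        // Oblique projection: x' = x + z * a * cos(theta), y' = y + z * a * sin(theta).
        double a = 0.5, theta = Math.toRadians(45);
        double xp = x + z * a * Math.cos(theta);
        double yp = y + z * a * Math.sin(theta);
        System.out.printf("oblique:    (%.2f, %.2f)%n", xp, yp);
    }
}
```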





Describe how to create different line styles in Java 2D.
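A sketch of one common way to do this: the stroke of a Graphics2D object is replaced with java.awt.BasicStroke instances that differ in width, end caps and dash pattern. Drawing is done into a BufferedImage so the example runs without a window; the output file name lines.png is an assumed example.

```java
import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class LineStyles {
    public static void main(String[] args) throws Exception {
        BufferedImage img = new BufferedImage(300, 120, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = img.createGraphics();
        g2.setColor(Color.WHITE);

        // Solid line, 3 pixels wide.
        g2.setStroke(new BasicStroke(3f));
        g2.drawLine(20, 20, 280, 20);

        // Dashed line: 10-pixel dashes separated by 5-pixel gaps.
        float[] dash = { 10f, 5f };
        g2.setStroke(new BasicStroke(2f, BasicStroke.CAP_BUTT,
                                     BasicStroke.JOIN_MITER, 10f, dash, 0f));
        g2.drawLine(20, 60, 280, 60);

        // Dotted line: 1-pixel dashes with round caps and 4-pixel gaps.
        float[] dot = { 1f, 4f };
        g2.setStroke(new BasicStroke(2f, BasicStroke.CAP_ROUND,
                                     BasicStroke.JOIN_ROUND, 10f, dot, 0f));
        g2.drawLine(20, 100, 280, 100);

        g2.dispose();
        ImageIO.write(img, "png", new File("lines.png"));
    }
}
```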





Describe illumination models. Derive the diffuse reflection illumination equation.
An illumination model, also called a lighting model and sometimes referred to as a shading model, is used to calculate the intensity of light that we should see at a given point on the surface of an object. A surface rendering algorithm uses intensity calculations from an illumination model to determine the light intensity for all projected pixel positions for the various surfaces in a scene.
Diffuse reflection:
An object's illumination is as important as its surface properties in computing its intensity. The object may be illuminated by light which does not come from any particular source but which comes from all directions. When such illumination is uniform from all directions, it is called diffuse illumination. Basically, diffuse illumination is background light which is reflected from walls, floor and ceiling.
When we assume that the light reflected up, down, right and left is of the same amount, we can say that the reflections are constant over each surface of the object and independent of the viewing direction. Such a reflection is called diffuse reflection. In practice, when an object is illuminated, some part of the light energy is absorbed by the surface of the object, while the rest is reflected. The ratio of the light reflected from the surface to the total incoming light is called the coefficient of reflection, or reflectivity. It is denoted by R and varies from 0 to 1; it is closer to 1 for a white surface and closer to 0 for a black surface.
The diffuse reflections from a surface are scattered with equal intensity in all directions, independent of the viewing direction. Such surfaces are sometimes referred to as ideal diffuse reflectors or Lambertian reflectors, since the radiated light energy from any point on the surface is governed by Lambert's cosine law. This law states that the reflection of light from a perfectly diffusing surface varies as the cosine of the angle between the normal to the surface and the direction of the reflected ray. This is illustrated in the figure below.
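The resulting diffuse reflection equation, in the standard textbook form (with θ the angle between the unit surface normal N and the unit direction to the light L, k_d the diffuse reflectivity and I_l the intensity of the point source), is:

```latex
I_{diff} = k_d\, I_l \cos\theta = k_d\, I_l\, (\mathbf{N}\cdot\mathbf{L})
```

Adding the contribution of the background (ambient) light of intensity I_a with ambient reflection coefficient k_a gives I = k_a I_a + k_d I_l (N · L), where the diffuse term is applied only when N · L > 0, i.e. when the surface faces the light.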


How many kilobytes does a frame buffer need for a 600 x 400 pixel display? Also find the aspect ratio of a raster system using an 8 x 10 inch screen and 100 pixels per inch.
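A worked solution under the usual assumptions (the question does not state the number of bits per pixel, so two common cases are shown):

Frame buffer: 600 x 400 = 240,000 pixels.
At 1 bit per pixel: 240,000 / 8 = 30,000 bytes ≈ 29.3 KB.
At 8 bits per pixel: 240,000 bytes ≈ 234.4 KB.
Aspect ratio: an 8 x 10 inch screen at 100 pixels per inch gives 800 x 1000 pixels, so the aspect ratio is 800 : 1000 = 4 : 5 (0.8).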






 
What is polygon rendering? Differentiate the different types of shading.
Rendering means giving the proper intensity at each point of a graphical object so that it looks like a real-world object. The different rendering (shading) methods are:
Constant Intensity Shading
Gouraud Shading
Phong Shading
Constant intensity (flat) shading is the simplest shading model. Each rendered polygon has a single normal vector, and the shading for the entire polygon is constant across its surface. With a small polygon count, this gives curved surfaces a faceted look.
Phong shading is the most sophisticated of the three methods. Each rendered polygon has one normal vector per vertex; shading is performed by interpolating the vectors across the surface and computing the color at each point of interest. Interpolating the normal vectors gives a reasonable approximation of a smoothly curved surface while using a limited number of polygons.
Gouraud shading is in between the two: like Phong shading, each polygon has one normal vector per vertex, but instead of interpolating the vectors, the color at each vertex is computed and then interpolated across the surface of the polygon.

Group C

List out some of the algorithms for visible surface detection. Describe depth sorting method.
Depth Buffer (Z-Buffer) Method: The basic idea is to test the Z-depth of each surface to determine the closest (visible) surface.
Scan-Line Method: This method keeps depth information for only a single scan line at a time.
A-Buffer Method: The A-buffer expands on the depth buffer method to allow transparencies.
Depth Sorting Method: First, the surfaces are sorted in order of decreasing depth. Second, the surfaces are scan-converted in order, starting with the surface of greatest depth.
Depth sorting algorithm:
The painter's algorithm is based on depth sorting and is a combined object-space and image-space algorithm. It proceeds as follows (a small sketch follows the steps):
Sort all polygons according to their z values (object space); the simplest choice is to use the maximum z value of each polygon.
Draw the polygons from back (maximum z) to front (minimum z).
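A small Java sketch of these two steps: polygons are sorted by their maximum z value and then drawn from back (largest z) to front, so nearer polygons overwrite farther ones. The polygon names and depth values are invented examples.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PainterSketch {
    static class Poly {
        String name; double maxZ;
        Poly(String name, double maxZ) { this.name = name; this.maxZ = maxZ; }
    }

    public static void main(String[] args) {
        List<Poly> polygons = new ArrayList<>();
        polygons.add(new Poly("back wall", 9.0));
        polygons.add(new Poly("table", 4.5));
        polygons.add(new Poly("cup", 2.0));

        // Sort by decreasing maximum depth: farthest polygon first.
        polygons.sort(Comparator.comparingDouble((Poly p) -> p.maxZ).reversed());

        for (Poly p : polygons)
            System.out.println("draw " + p.name + " (max z = " + p.maxZ + ")");
    }
}
```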
Digitize a line using BLA (Bresenham's line algorithm) for the line having end points (4, 5) and (10, 9).
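A short Java sketch of Bresenham's line algorithm for these endpoints; since the slope (4/6) is between 0 and 1, the basic |m| < 1 form applies, and the pixel positions are printed rather than drawn.

```java
public class BresenhamLine {
    public static void main(String[] args) {
        int x1 = 4, y1 = 5, x2 = 10, y2 = 9;
        int dx = x2 - x1, dy = y2 - y1;          // dx = 6, dy = 4
        int p = 2 * dy - dx;                     // initial decision parameter = 2

        int x = x1, y = y1;
        System.out.printf("plot (%d, %d)%n", x, y);
        while (x < x2) {
            x++;
            if (p < 0) {
                p += 2 * dy;                     // keep the same y
            } else {
                y++;
                p += 2 * (dy - dx);              // step up in y
            }
            System.out.printf("plot (%d, %d)%n", x, y);
        }
    }
}
```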