Thursday, November 15, 2012

Scenario 6

I have a scenario where I need to load a table called "name_address" first, storing all the claimant names along with a number generated using a given formula.

This number will then be used in other main tables such as the claim table, the transaction table, etc.

Can you please help me decide whether constraint-based loading will be useful here, or whether I need to specify a target load plan?

Scenario 5

Source
Col_1    Col_2
D1       A1
D1       A2
D2       A3
D2       A4
D2       A5
D3       A6
D3       A7

Target
Col_1    Col_2
D1       A1-A2
D2       A3-A4-A5
D3       A6-A7


Answer :

This output can be obtained by using a Sorter transformation, an Expression transformation, and an Aggregator transformation.

Let me show the mapping now.
This shows the mapping for the above scenario.
Now let's see the steps in detail:
1) A usual Sorter transformation on Col_1
2) An Expression transformation as below
3) An Aggregator transformation as below
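The three steps above can be sketched in Python (a rough sketch of the mapping logic, not Informatica code):

```python
from itertools import groupby

def concat_by_key(rows):
    """Mimic the Sorter + Expression + Aggregator flow:
    sort on Col_1, then concatenate the Col_2 values per key with '-'."""
    rows = sorted(rows, key=lambda r: r[0])       # Sorter on Col_1
    out = []
    for key, grp in groupby(rows, key=lambda r: r[0]):
        # The Expression builds the running string; the Aggregator
        # keeps the last (fully concatenated) row per group.
        out.append((key, "-".join(r[1] for r in grp)))
    return out

source = [("D1", "A1"), ("D1", "A2"), ("D2", "A3"),
          ("D2", "A4"), ("D2", "A5"), ("D3", "A6"), ("D3", "A7")]
print(concat_by_key(source))
# [('D1', 'A1-A2'), ('D2', 'A3-A4-A5'), ('D3', 'A6-A7')]
```

In the real mapping, the Expression transformation carries the running concatenation in a variable port and the Aggregator (grouped on Col_1) returns the last row of each group.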

Create different flatfiles as output

We have a scenario in our project where I have to create different flat files as output based on the company number.

The issues are:
1) We run this workflow once every month.
2) New company numbers and corresponding records may be added at any time during the month. So each time a company is added, we have to create a flat file for it as well.
3) The name of the flat file should be "Loss history report for" + company number.
4) Also, every month when the workflow runs, the flat files have to be placed in that month's folder. E.g., in January the files should be placed in the Jan2012 folder.
The folders will already be present.
5) Is it possible to also create the folders inside the target every month?

Does anyone have answer for this?
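In Informatica, the dynamic file name is typically driven through a FileName port on the flat file target, and folder creation (point 5) through a pre-session command. The naming logic itself can be sketched in Python (the .txt extension and base_dir are assumptions for illustration):

```python
import datetime
import os

def report_path(company_number, run_date=None, base_dir="."):
    """Build the month folder and dynamic file name described above.
    The '.txt' extension and base_dir are illustrative assumptions."""
    run_date = run_date or datetime.date.today()
    folder = run_date.strftime("%b%Y")   # e.g. "Jan2012" for a January run
    fname = f"Loss history report for {company_number}.txt"
    # os.makedirs(os.path.join(base_dir, folder), exist_ok=True) would
    # create the month folder if it did not already exist (point 5).
    return os.path.join(base_dir, folder, fname)

print(report_path(123, datetime.date(2012, 1, 15)))
```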


Wednesday, November 14, 2012

Java Transformation scenario2

Here I will show the mapping for a scenario where the input records from the emp table need to be inserted into the target three times.

  


One more requirement is to change ename to "java" || job.

Both requirements are implemented as shown below:


check out this link for other java scenario
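The Java transformation's row-generation logic can be sketched in Python (in the real transformation, generateRow() is called three times per input row):

```python
def java_transform(rows):
    """Sketch of the Java transformation: for every input row, emit
    three output rows with ename set to "java" || job."""
    out = []
    for empno, ename, job in rows:
        for _ in range(3):                 # three generateRow() calls per input row
            out.append((empno, "java" + job, job))
    return out

print(java_transform([(7219, "Sri", "CLERK")]))
# [(7219, 'javaCLERK', 'CLERK'), (7219, 'javaCLERK', 'CLERK'), (7219, 'javaCLERK', 'CLERK')]
```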

Java Transformation

Let me show a simple mapping involving a Java transformation.



Here the source is the emp table and the target is the rangasp table in Oracle.

The requirement here is to change all employee names to "java".
The Java code has been written in such a way that the ename of every record is changed to "java".



The left side contains all the input and output ports.
generateRow() is used to generate an output row for each record.

The output is something like this :

eno ename
721 java
678 java
899 java

Please do comment and ask if you have any doubts!! Thank you.

Tuesday, November 13, 2012

Scenario 4

Input :

empno    ename    job
7219     Sri      CLERK
7782     Uma    MANAGER
7839     Ragu   PRESIDENT
7934     Sendil   CLERK

Output :

Repeat each row three times and also change the ename to "Srivatsan".
empno    ename    job
7219     Srivatsan CLERK
7219     Srivatsan CLERK
7219     Srivatsan CLERK
7782     Srivatsan MANAGER
7782     Srivatsan MANAGER
7782     Srivatsan MANAGER
7839     Srivatsan PRESIDENT
7839     Srivatsan PRESIDENT
7839     Srivatsan PRESIDENT
7934     Srivatsan CLERK
7934     Srivatsan CLERK
7934     Srivatsan CLERK


Post your mapping; one lucky winner will be selected!!



Transaction control Transformation

Let's discuss the Transaction Control transformation in this video:

I will show you a scenario using the Transaction Control transformation and explain when it has to be used.

Here we can see three components:

1) Expression transformation

2) Transaction Control transformation

3) Target

The source for us is the emp table of the scott schema.

The requirement is to split the source rows into different flat files based on the dept number.


That is, dept nos 10, 20, and 30 need to be split and saved in separate files.

We can ask: "Why can't this be done easily by a Router?"

Good question! The answer is: when the dept numbers change dynamically, how would we get the router groups to change?

So this is how this scenario is handled.

Here we have a field where we check this condition: IIF(v_DEPTNO_2 != DEPTNO, 0, 1)

If a dept number comes in for the first time, this will be zero, and our Transaction Control transformation will create a flat file with the corresponding rows.

The filename field is used to provide the name of the flat file that needs to be generated dynamically.

Based on the column holding that condition, the TC transformation performs the following commit activity:
IIF(DEPTNO_SPLITTING = 0, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)

Whenever a new dept number comes in, a commit is done and a flat file is created. One important note: the input has to be sorted first.




In this way, a separate file is created for each dept number.

Thank you, please comment!!
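The split-and-commit logic can be sketched in Python (file names like dept_10.txt are hypothetical; the mapping derives them from the filename field):

```python
from itertools import groupby

def split_by_dept(rows):
    """Sketch of the Transaction Control flow: input sorted on DEPTNO,
    a commit-before on each new dept number, one file per department."""
    rows = sorted(rows, key=lambda r: r[0])        # the input must be sorted first
    files = {}
    for deptno, grp in groupby(rows, key=lambda r: r[0]):
        # each new dept number triggers a commit and a fresh flat file
        files[f"dept_{deptno}.txt"] = list(grp)    # hypothetical file name
    return files

out = split_by_dept([(10, "CLARK"), (20, "SMITH"), (10, "KING"), (30, "BLAKE")])
print(sorted(out))
# ['dept_10.txt', 'dept_20.txt', 'dept_30.txt']
```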

Normalizer transformation scenario

How can we do this:
StudentName Subject1 Marks1 Subject2 Marks2
Sam Maths 100 English 80
John Maths 85 English 75

to the target:

StudentName Subject Marks
Sam Maths 100
Sam English 80
John Maths 85
John English 75

Answer :


In this way we can get the above output
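The Normalizer's row-splitting logic can be sketched in Python (each input row carrying two subject/marks pairs becomes two output rows):

```python
def normalize(rows):
    """Sketch of the Normalizer: each input row with two
    (subject, marks) pairs is split into two output rows."""
    out = []
    for name, sub1, marks1, sub2, marks2 in rows:
        out.append((name, sub1, marks1))
        out.append((name, sub2, marks2))
    return out

print(normalize([("Sam", "Maths", 100, "English", 80)]))
# [('Sam', 'Maths', 100), ('Sam', 'English', 80)]
```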

Wednesday, October 31, 2012

How will you approach this : 3

Input :

Student Name    Subject Name        Marks
Srivatsan       Maths               100
Sendil          Maths               95
Ragu            Physical Science    96
Muthu           Maths               75
Uma             Life Science        98
Raja            Life Science        92
Jothi           Physical Science    97
Karthick        Life Science        91





Output :

The output should team up students based on top marks. Each team should have 2 members.

Students Name      Team Name    Marks
Srivatsan/Uma      Team1        100/98
Sendil/Raja        Team2        95/92
Ragu/Jothi         Team3        96/97
Muthu/Karthick     Team4        75/91


Tuesday, October 30, 2012

How will you approach this : 2

Input :

Student Name    Maths    Life Science    Physical Science
Sam             100      70              80
John            75       100             85
Tom             80       100             85


Output :


Student Name    Subject Name        Marks/Avg Marks
Sam             Maths               100/85
Sam             Life Science        70/90
Sam             Physical Science    80/83.33
John            Maths               75/85
John            Life Science        100/90
John            Physical Science    85/83.33
Tom             Maths               80/85
Tom             Life Science        100/90
Tom             Physical Science    85/83.33
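One way to approach this output, sketched in Python: unpivot each student's row and append the class average for that subject (in Informatica this would be a Normalizer plus an Aggregator joined back to the detail rows):

```python
def marks_with_avg(rows, subjects):
    """Unpivot each student's row and append the per-subject class average."""
    # per-subject class averages across all students
    avgs = {s: sum(r[i + 1] for r in rows) / len(rows)
            for i, s in enumerate(subjects)}
    out = []
    for r in rows:
        for i, s in enumerate(subjects):
            out.append((r[0], s, f"{r[i + 1]}/{round(avgs[s], 2):g}"))
    return out

data = [("Sam", 100, 70, 80), ("John", 75, 100, 85), ("Tom", 80, 100, 85)]
print(marks_with_avg(data, ["Maths", "Life Science", "Physical Science"]))
```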

Wednesday, October 24, 2012

PL/SQL : Simple Procedure Example

CREATE OR REPLACE PROCEDURE EXAMPLE_PROC
IS
  -- Explicit cursor over the whole EMP table
  CURSOR EMP_CUR IS
    SELECT *
    FROM EMP;

  EMP_REC EMP_CUR%ROWTYPE;
BEGIN
  OPEN EMP_CUR;

  LOOP
    FETCH EMP_CUR INTO EMP_REC;
    EXIT WHEN EMP_CUR%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(EMP_REC.ENAME);  -- print each employee name
  END LOOP;

  CLOSE EMP_CUR;
END;



COMMANDS USED TO EXECUTE THE PROCEDURE (SET SERVEROUTPUT ON is needed to see the DBMS_OUTPUT lines):
SET SERVEROUTPUT ON;
EXEC EXAMPLE_PROC;

OUTPUT :

JONES
MARTIN
BLAKE
CLARK
SCOTT
KING
TURNER
ADAMS
JAMES
FORD
MILLER

Monday, October 22, 2012

PL/SQL : Simple Explicit Cursor Example

DECLARE
  -- Explicit cursor over the whole EMP table
  CURSOR EMP_CUR IS
    SELECT *
    FROM EMP;

  EMP_REC EMP_CUR%ROWTYPE;
BEGIN
  OPEN EMP_CUR;

  LOOP
    FETCH EMP_CUR INTO EMP_REC;
    EXIT WHEN EMP_CUR%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(EMP_REC.ENAME);  -- print each employee name
  END LOOP;

  CLOSE EMP_CUR;
END;


INPUT :
NA

OUTPUT :

JONES
MARTIN
BLAKE
CLARK
SCOTT
KING
TURNER
ADAMS
JAMES
FORD
MILLER

Sunday, October 21, 2012

PL/SQL : Simple Query with two sections

DECLARE                     -- Declaration section
  v_ename emp.ename%TYPE;
BEGIN                       -- Execution section
  SELECT ename
  INTO v_ename
  FROM emp
  WHERE empno = 7219;       -- empno is numeric, so no quotes are needed

  DBMS_OUTPUT.PUT_LINE(v_ename);
END;


INPUT : NA
OUTPUT : John (the ename corresponding to the given empno will be displayed)

Wednesday, October 10, 2012

Connected Lookup vs Unconnected Lookup

Connected Lookup :

1) A Connected Lookup can return more than one value; the number of output ports can be more than one.

2) The cache stores all the lookup columns.

3) Can use both a static and a dynamic cache.

4) As the name suggests, these lookups are connected in the flow and get their input from the pipeline.



Unconnected Lookup :

1) They can return only one value.

2) The cache stores the lookup condition columns and the return port.

3) Can use only a static cache.

4) These lookups are not connected in the flow and get their input values through an expression call (:LKP).

Friday, October 5, 2012

Source Qualifier Transformation and its Properties

Source Qualifier Transformation is an active transformation. The reason is that the output row count can be modified by using options like "Select Distinct".

There are various properties in the Source Qualifier transformation which are very useful, such as:

1) We can get the output from the Source Qualifier sorted by using the property called "Number of Sorted Ports".

2) We can filter at the source level itself, so that unnecessary records won't be taken up for further processing, using the Filter property.

3) We can use the "Select Distinct" property to avoid duplicate output from the Source Qualifier.

4) We have the option to write Pre- and Post-SQL, which will be executed before and after the main query.

5) We can use the custom/user-defined SQL property to write our own SQL, so that the output is generated based on it.
Note: When we use a user-defined/custom SQL override, the Select Distinct, sorted ports, filter, etc. won't work, as they are overridden by the custom SQL.


Wednesday, October 3, 2012

Aggregating without using Aggregator Transformation

At the Informatica level, the Aggregator transformation's output can be obtained by replacing it with the following transformations:

1) Sorter transformation -- used to sort the data asc/desc

2) Expression transformation -- a basic transformation used in almost all mappings

3) Filter transformation -- to obtain the records matching a specific condition

The Aggregator transformation reduces the number of records by aggregating on the specific columns given by us.

The reverse transformation of the Aggregator is the Normalizer transformation, which increases the number of records.
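The Sorter + Expression + Filter replacement can be sketched in Python, here computing a per-group maximum: sort by the group key, keep a running aggregate in a "variable port", flag the last row of each group, and filter on that flag:

```python
def max_per_group(rows):
    """Sketch of Sorter + Expression + Filter replacing an Aggregator:
    sort by key, keep a running max, and keep only each group's last row."""
    rows = sorted(rows, key=lambda r: r[0])            # Sorter
    out = []
    running = None
    for i, (key, val) in enumerate(rows):
        if running is None or rows[i - 1][0] != key:
            running = val                              # a new group starts
        else:
            running = max(running, val)                # variable-port running max
        is_last = i == len(rows) - 1 or rows[i + 1][0] != key
        if is_last:                                    # Filter keeps the group's last row
            out.append((key, running))
    return out

print(max_per_group([("A", 3), ("B", 7), ("A", 9), ("B", 2)]))
# [('A', 9), ('B', 7)]
```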

Tuesday, October 2, 2012

Constraint Based Loading

Whenever we have only one Source Qualifier transformation and multiple targets, the way to specify the load order of the targets is the property called "Constraint based loading".

By enabling the check box, the targets are loaded such that the master table is loaded first and then the dependent tables.

If we have more than one Source Qualifier transformation, then we have to use a Target Load Plan.

Constraint based loading

Target Load Plan

The Target Load Plan is used to load the targets in whichever order we require.

A Target Load Plan can be used when the mapping has more than one Source Qualifier transformation.
The order can be specified by going to Mappings -> Target Load Plan.

Here we order the various Source Qualifier transformations used in our mapping, and each thereby loads its corresponding target in that order.

Target Load Plan

What if our mapping has only one Source Qualifier transformation and has to load many targets?
Please check the following link


Scenario 3 - Aggregator Transformation Example

Input :

Student Name    Subject Name        Marks
Sam             Maths               100
Tom             Maths               80
Sam             Physical Science    80
John            Maths               75
Sam             Life Science        70
John            Life Science        100
John            Physical Science    85
Tom             Life Science        100
Tom             Physical Science    85

Output :

Student Name    Maths    Life Science    Physical Science
Sam             100      70              80
John            75       100             85
Tom             80       100             85


Answer :

Here the transformations we have to use are Sorter and Aggregator.

Let's see the approach:

1) The input from the Source Qualifier is sent into the Sorter transformation and sorted on Student Name and Subject.
The following output is obtained
Student Name    Subject Name        Marks
Sam             Maths               100
Sam             Life Science        70
Sam             Physical Science    80
John            Maths               75
John            Life Science        100
John            Physical Science    85
Tom             Maths               80
Tom             Life Science        100
Tom             Physical Science    85

2) The output from the Sorter transformation is sent into the Aggregator transformation, with Group By on Student Name selected, since we need the final output grouped by student name.
Also, in the Aggregator transformation, three output ports are created:
1) Maths --- with formula -- MAX(Marks, Subject = 'Maths')
2) Life Science --- with formula -- MAX(Marks, Subject = 'Life Science')
3) Physical Science --- with formula -- MAX(Marks, Subject = 'Physical Science')

3) Now Student Name, Maths, Life Science, and Physical Science are connected to the target.
The output obtained is as follows:

Student Name    Maths    Life Science    Physical Science
Sam             100      70              80
John            75       100             85
Tom             80       100             85
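The Sorter + Aggregator pivot can be sketched in Python; the MAX(Marks, Subject = '...') trick becomes a conditional max per subject within each student's group:

```python
from itertools import groupby

def pivot_marks(rows, subjects):
    """Sketch of the Sorter + Aggregator pivot: group by student and,
    per subject, keep the marks where the subject matches."""
    rows = sorted(rows, key=lambda r: r[0])            # Sorter on Student Name
    out = []
    for name, grp in groupby(rows, key=lambda r: r[0]):
        grp = list(grp)
        # conditional max, like MAX(Marks, Subject = '<subject>')
        marks = {s: max(m for _, subj, m in grp if subj == s) for s in subjects}
        out.append((name, *(marks[s] for s in subjects)))
    return out

data = [("Sam", "Maths", 100), ("Tom", "Maths", 80), ("Sam", "Physical Science", 80),
        ("John", "Maths", 75), ("Sam", "Life Science", 70), ("John", "Life Science", 100),
        ("John", "Physical Science", 85), ("Tom", "Life Science", 100),
        ("Tom", "Physical Science", 85)]
print(pivot_marks(data, ["Maths", "Life Science", "Physical Science"]))
# [('John', 75, 100, 85), ('Sam', 100, 70, 80), ('Tom', 80, 100, 85)]
```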


Joining Heterogeneous Sources

Joining heterogeneous sources can't be done using the Source Qualifier transformation.

The Source Qualifier transformation is used to join homogeneous sources. Whenever we use a flat file, the Source Qualifier properties get disabled, and we can't use it to join the sources.

This is one of the reasons why we have the Joiner transformation. Also, when we have sources that don't have a common column / primary-foreign key relationship, we have to use the Joiner transformation.

Joiner T/f is used for following

1) Join data from different Relational Databases.

2) Join data from different Flat Files.

3) Join relational sources and flat files.


Sources that don't have a common column can be joined by creating a dummy column using a Sequence Generator transformation.


Monday, October 1, 2012

How will you approach this : 1

Input :

Name    Year    Transport    House Rent    Food
Sri     2000    200          1500          500
Uma     2000    300          1200          300
Ragu    2000    300          1350          350
Sri     2001    300          1550          450
Uma     2001    350          1200          290
Ragu    2001    350          1400          350

Output :

Name    Year    Expense Type    Expense
Sri     2000    Transport       200
Sri     2000    House rent      1500
Sri     2000    Food            500
Uma     2000    Transport       300
Uma     2000    House rent      1200
Uma     2000    Food            300
Ragu    2000    Transport       300
Ragu    2000    House rent      1350
Ragu    2000    Food            350

Scenario 2

Here we have a scenario to get the output below from the given input. Let us see how to solve it:

Input

Employee name              Basic            DA             HRA          MedicalA             FoodA
John                              5000             200             1000               500                     500                                  
George                          6000             100               800               500                     200                              
Carol                             3000             700              700                800                     900      

Output Required :

Employee name          Salary_type       Salary_Amt

John                             Basic                  5000                         
John                             DA                     200                         
John                             HRA                  1000                           
John                             MedicalA            500                                 
John                             FoodA                500                                   
George                         Basic                  6000                              
George                         DA                     100                             
George                         HRA                   800                                  
George                         MedicalA            500                                          
George                         FoodA                200                                 
Carol                            Basic                   3000                           
Carol                            DA                      700                        
Carol                            HRA                   700                         
Carol                            MedicalA             800                                   
Carol                            FoodA                 900                   



Answer :

1) The input from the source has to be connected to a Normalizer transformation.

2) From the Normalizer transformation we get two outputs: one is the Employee name and the other is the Amount. A GCID number is also generated, which identifies the salary type.

3) In this way the Normalizer transformation generates the transpose of the input table.

4) The output from the Normalizer can be given to an Expression transformation to derive the Salary_type from the GCID number.
It will look like this:

Employee name          GCID           Salary_Amt

John                             1                      5000                         
John                             2                      200                         
John                             3                      1000                           
John                             4                      500                                 
John                             5                      500                                   
George                         1                      6000                              
George                         2                      100                             
George                         3                      800                                  
George                         4                      500                                          
George                         5                      200                                 
Carol                            1                      3000                           
Carol                            2                      700                        
Carol                            3                      700                         
Carol                            4                      800                                   
Carol                            5                      900 


5) The Normalizer is an active transformation and is the reverse of the Aggregator transformation, which reduces the number of output rows.
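The Normalizer + Expression steps above can be sketched in Python: the Normalizer emits one row per amount with a GCID of 1..5, and the Expression decodes the GCID to the Salary_type:

```python
def expand_salary(rows, salary_types):
    """Sketch of Normalizer + Expression: one output row per amount,
    with the GCID (1..5) decoded to its Salary_type."""
    out = []
    for name, *amounts in rows:
        for gcid, amt in enumerate(amounts, start=1):       # Normalizer output with GCID
            out.append((name, salary_types[gcid - 1], amt)) # like DECODE(GCID, 1, 'Basic', ...)
    return out

types = ["Basic", "DA", "HRA", "MedicalA", "FoodA"]
print(expand_salary([("John", 5000, 200, 1000, 500, 500)], types))
# [('John', 'Basic', 5000), ('John', 'DA', 200), ('John', 'HRA', 1000),
#  ('John', 'MedicalA', 500), ('John', 'FoodA', 500)]
```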

Scenario 1

Suppose we have to load three targets in such a way that:

1) the first target should be loaded with records 1, 4, 7, ...
2) the second target should be loaded with records 2, 5, 8, ...
and
3) the third target should be loaded with records 3, 6, 9, ...

How can this be achieved?


Answer :

1) Let's have an Expression transformation and load all the data from the Source Qualifier into it.

2) Now we create a Sequence Generator transformation to generate a sequence number for identifying which records go to which target.

3) Connect the Sequence Generator to the Expression transformation.

4) We use a Router transformation to distribute the records to the three targets by creating three groups in it, namely target1, target2, target3.

5) The following conditions are given on the three groups in the Router transformation to send the records to their respective targets:
target1 --- MOD(Sequence_no, 3) = 1
target2 --- MOD(Sequence_no, 3) = 2
target3 --- MOD(Sequence_no, 3) = 0

6) The outputs of the Router are connected to the respective targets.
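The Sequence Generator + Router logic can be sketched in Python: number the rows 1, 2, 3, ... and route on MOD(seq, 3):

```python
def route(rows):
    """Sketch of the Sequence Generator + Router: route row n by n % 3,
    so the targets receive rows 1,4,7 / 2,5,8 / 3,6,9."""
    targets = {1: [], 2: [], 0: []}          # router groups target1, target2, target3
    for seq, row in enumerate(rows, start=1):
        targets[seq % 3].append(row)
    return targets[1], targets[2], targets[0]

t1, t2, t3 = route(["r1", "r2", "r3", "r4", "r5", "r6", "r7"])
print(t1, t2, t3)
# ['r1', 'r4', 'r7'] ['r2', 'r5'] ['r3', 'r6']
```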

How to avoid Duplicate Rows in Informatica

Duplicate rows in Informatica can be avoided by selecting the "Select Distinct" check box in the Source Qualifier transformation.

If the source is a flat file, the Source Qualifier cannot be used to produce distinct data.
To avoid the duplicate rows in that case, we have to use a Sorter transformation and select the option called "Distinct".

One more way to avoid duplicate rows is by using an Aggregator transformation: a Group By on the columns that identify a duplicate will give us non-duplicate records.