
Actionreserach_AssignmentInstructions x20170512164208actionresearch_assignmentinstructions3.zip
 

Please find the assignment instructions in the attached folder.


Do Part 1 only.

Refer to the sample papers and try to complete it on time. Thank you.

 

TOPIC: JOB SEARCH PROCESS AS A NETWORKING ENGINEER

This assignment consists of four papers

1) Introduction and Methodology

2) Literature review and proposal

3) First Iteration

4) Final paper (Last three iterations)

1st Paper (Due 14th May 2017)

Introduction and Methodology: (Minimum 5 pages of content)

· Introduction: what you plan to accomplish and why (2 pages of content)

· Methodology: a research paper about Action Research (3 pages of content); include reasons and justification for your approach

· Minimum of five (5) professional references

2nd Paper (Due 21st May)

Literature Review and Proposal: (Minimum 6 pages of content)

· Literature Review is a research paper about your topic

· Proposal – this is your plan

· Briefly summarize, in a paragraph each, the proposed action research (AR) iterations (at least 4 iterations)

· 6-8 pages

· Minimum of eight (8) professional references

3rd Paper (Due 28th May)

1st Iteration: (Min 4 pages of content)

· Plan – include a description of all the planning activity that has taken place; may include agendas or other documents as appropriate

· Action – at least one page in length; should include a description of the actual activity

· Observation – at least one page in length; should include a description of all the information collected as well as any analysis

· Reflection – at least one page in length; should include a description of your thoughts about what happened, what went well, and what did not go so well

· 6-8 pages; minimum of eight professional references

Final Paper (Due 4th June)

2nd, 3rd and 4th Iterations and summary:

· The final paper must include all the contents of the first three papers along with the 2nd, 3rd, and 4th iterations.

· Instructions for writing the 2nd, 3rd, and 4th iterations are the same as for the 1st iteration paper.

· One page of summary or conclusion of the entire paper.

Other Instructions:

· All Sample papers are attached for your reference.

· No plagiarized content

· Please provide genuine references that are accessible online


Actionresearch_Assignmentinstructions/Sample papers/1-Introduction & Methodology x
Introduction

In this modern era, new technologies are emerging and upgrades are always occurring. Many of these technologies are used in different fields like robotics, aerospace engineering, and nanotechnology. The human race has progressed to a great extent through the use of technology in various fields.
Data storage and management have gained primary importance recently. Enormous amounts of data are being used and stored by organizations. Storing and managing this immense amount of information has always been challenging, even though many related technologies have taken off recently. The term “big data” has emerged, and the Hadoop technology uses a set of algorithms to process large data sets across clusters of machines.
This project involves getting a job opportunity as a Hadoop developer so that I can learn many things about the technology – what it is, where it comes from, how it can be applied to business processes, and how to get started using it.
With this project, I would like to research job opportunities in Hadoop. The primary reason for choosing this topic is that Hadoop is among the latest technologies in the software industry. My undergraduate degree is in computer science, and I gained some knowledge of SQL and databases. However, while completing my master’s, I have gained good knowledge about big data and how its various types can be stored and processed in Hadoop. I believe that social media companies definitely need Hadoop, given their intense competition and the real-time decisions that affect market share.
Hadoop has its roots at Yahoo!, whose internet search engine business requires constant processing of large amounts of web page data. Eric Baldeschwieler challenged Owen O’Malley, co-founder of Hortonworks, to solve a hard problem: store and process the information on the web in a simple, scalable, and economically feasible way. They looked at traditional storage approaches; however, they quickly realized that these simply were not going to work for the sort of information (much of it unstructured) and the sheer volume Yahoo! would need to manage (Baldeschweiler, 2013). Hadoop is a data management platform, as it offers a lower-cost storage framework and open source development (Yuhanna, 2014). The research suggests that the broad enterprise can embrace and use Hadoop to create big data value (Business Value of Hadoop, 2013).
Hadoop turned into a foundational architecture at Yahoo!, and it underpins an extensive variety of business-critical applications. Organizations in almost every vertical began to embrace Hadoop. By 2010, the project had attracted a large number of users, and wide enterprise momentum had been created (Baldeschweiler, 2013).
While each organization is different, their big data needs are frequently fundamentally the same. Hadoop, as a critical piece of the emerging modern data architecture, gathers enormous amounts of data across social media activity, clickstream data, web logs, financial transactions, video, and machine/sensor data from equipment in the field (Business Value of Hadoop, 2013).
According to Baldeschwieler (2013), Hadoop is a system for scalable and reliable distributed data storage and processing. It allows the processing of vast data sets across clusters of machines using a simple programming model. It is designed to scale up from single servers to thousands of machines, aggregating the local computation and storage of every server.

Methodology

Action Research:

Action research is a methodology of systematic inquiry that empowers individuals to discover viable solutions to genuine problems experienced in everyday life. Action research has a long and distinguished lineage that spans more than 50 years across several continents. The term action research has long been associated with the work of Kurt Lewin, who viewed this research methodology as cyclical, dynamic, and collaborative in nature. Through repeated cycles of planning, observing, and reflecting, individuals and groups engaged in action research can implement the changes needed for social transformation (Lavery, 2014).
According to Corey (1953), action research is the process by which practitioners attempt to study their problems scientifically in order to guide, correct, and evaluate their actions and decisions. Understanding action research involves recognizing how the responses to these struggles helped create new approaches to, and understandings of, substantive change over long periods and across physical, social, and emotional boundaries (Glassman, 2014).
The distinction between action research and other types of research is that, during the process, researchers need to develop and apply a range of skills to achieve their aims, such as careful planning, sharpened observation and listening, assessment, and critical reflection. Traditional research is conducted to report and publish conclusions that can be generalized to larger populations, whereas action research is conducted to take action and effect a positive change in the environment that was identified. Traditional research can be done in an environment that can be controlled, but action research should be done in the practitioner’s own setting, such as schools and classrooms (Reason, 2008).

Purpose:

The purpose of the research was twofold. First, it was to investigate how teachers perceive and interpret evidence gathered through action research projects conducted within school settings. Second, it was to explore how these understandings could be used to inform professional practice within the schools (Lavery, 2014).

Stages of Action Research: The process of action research is only poorly described as a mechanical sequence of steps. According to McTaggart (2007), it is generally thought to involve a spiral of self-reflective cycles of the following:
· Planning a change
· Acting and observing the process and consequences of the change
· Reflecting on these processes and consequences
· Replanning
· Acting and observing again
· Reflecting again, and so on.
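The spiral above can be sketched as a small program. This is an illustrative model only – the names `Iteration` and `run_spiral` are invented for this sketch, not part of the action research literature – but it makes the cyclical, self-feeding structure explicit: each cycle's reflection seeds the next cycle's plan.

```python
from dataclasses import dataclass

@dataclass
class Iteration:
    """One plan-act-observe-reflect cycle of an action research spiral."""
    plan: str
    action: str = ""
    observation: str = ""
    reflection: str = ""

def run_spiral(initial_plan, replan, n_cycles):
    """Run n cycles; each cycle's reflection seeds the next plan (replanning)."""
    cycles, plan = [], initial_plan
    for _ in range(n_cycles):
        it = Iteration(plan=plan)
        it.action = f"carry out: {it.plan}"                  # acting on the plan
        it.observation = f"record results of: {it.action}"   # observing
        it.reflection = f"assess: {it.observation}"          # reflecting
        cycles.append(it)
        plan = replan(it.reflection)                         # replanning step
    return cycles

cycles = run_spiral("research Hadoop roles",
                    lambda reflection: f"revised plan informed by ({reflection})",
                    n_cycles=2)
```

The second cycle's plan is derived from the first cycle's reflection, which is exactly what distinguishes the spiral from a fixed checklist of steps.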

Initial Reflection: Action research emerges from an issue, predicament, or uncertainty in the circumstances in which practitioners find themselves. It might be at the level of a general concern, an apparent need, or a course-related issue (Lewin, 1952).

Planning: The most essential outcome of the planning stage is a detailed plan of the action that is intended to bring about the changes (Lewin, 1952).

Action: In the light of experience and feedback, minor deviations from the plan may occur; record these deviations along with the reasons behind them. Moreover, new insights can be incorporated into the current project or recorded for a future project.

Observation: Detailed observation, checking, and recording enable us to assess the impact of the action and hence the adequacy of the proposed change. It is best to maintain a diary or journal to record the observations and insights of the project (Lewin, 1952).

Reflection: At the end of every cycle, it is important to reflect on the observations that have been made in the diaries or journals.

Figure 1: Integration of two organizational schemes for the step-by-step process of action research. Source: (Reason, 2008).

Characteristics of Action Research:
According to Schuler (1996), the components of action research are the five C’s:
· Commitment: All participants involved in the project need time to trust each other and to observe the practice, changes, approaches, documents, reflections, and finally the results.
· Collaboration: In action research, the power relations among participants are equal; every individual contributes and has a stake. Collaboration is not the same as compromise; rather, it involves a cyclical process of sharing, giving, and taking.
· Concern: The concern of action research means that all participants will develop a community of critical friends.
· Consideration: Reflective practice is the careful review of one’s professional actions. Reflection requires concentration and careful consideration as one looks for patterns and connections that will generate meaning within the investigation. Reflection is a challenging, focused, and critical assessment of one’s own conduct as part of one’s craft.
· Change: Change is difficult, yet it is an important element of remaining effective as a practitioner.
In my opinion, action research is a form of investigation well suited to a project aimed at gaining employment as a Hadoop developer. It is an effective tool for getting a job in Hadoop. This action research gave me the idea of splitting the whole process into small iterations. Each iteration includes the plan, action, observation, and reflection phases of action research. This iterative process helped me work efficiently to achieve my targets. It gave me knowledge and skills about Hadoop and showed me the best way to gain employment.

Actionresearch_Assignmentinstructions/Sample papers/2-Literature review and proposal x
Literature Review:

Hadoop is an open source, reliable, and scalable distributed computing platform that stores and processes data. It includes a fault-tolerant storage system known as the Hadoop Distributed File System (HDFS). HDFS is capable of storing large amounts of data, growing incrementally, and surviving the failure of major parts of the storage infrastructure without losing data (What Is Hadoop?, 2008).
Hadoop leverages a cluster of nodes to run MapReduce programs massively in parallel. A MapReduce program consists of two steps: the Map step processes input data, and the Reduce step assembles intermediate results into a final result. Each cluster node has a local file system and a local CPU on which to run the MapReduce programs. Data is broken into blocks, stored across the local files of different nodes, and replicated for reliability. The local files constitute the file system called the Hadoop Distributed File System (HDFS). The number of nodes in a cluster varies from hundreds to many thousands of machines. Hadoop can also handle a certain set of fail-over situations (Lay, 2010).
Hadoop has developed into the system of choice for engineers analyzing big data in fields such as finance, marketing, and bioinformatics (Zaharia, n.d.). At the same time, the changing nature of data itself, along with a desire for faster feedback, has sparked interest in new approaches, including tools that can deliver ad hoc, real-time processing and the ability to parse the interconnected data flooding out of social networks and mobile devices (Mone, 2013).
As a distinct tool, Hadoop is aimed at problems that require assessment of all available data. For instance, image processing and text analysis usually demand that every single record be examined, often in the context of similar records. Hadoop uses a procedure called MapReduce to carry out this comprehensive analysis quickly (What Is Hadoop?, 2008).
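The two-step Map/Reduce split described above can be illustrated with a minimal, framework-free Python sketch of the classic word-count example. This is a single-machine toy, not the Hadoop API itself; in real Hadoop, the map and reduce tasks run distributed across cluster nodes against data in HDFS.

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: process each input record into intermediate (word, 1) pairs."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Reduce step: assemble the intermediate pairs into a final count per word."""
    counts = defaultdict(int)
    for word, one in pairs:
        counts[word] += one
    return dict(counts)

docs = ["Hadoop stores data", "Hadoop processes data"]
result = reduce_phase(map_phase(docs))
# result: {'hadoop': 2, 'stores': 1, 'data': 2, 'processes': 1}
```

Because each (word, 1) pair is independent, the map step can run on many nodes at once, and the framework only has to group pairs by key before the reduce step – which is what lets Hadoop scale this pattern to very large data sets.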

History of Hadoop:

In 2002, the Nutch project began as an open source web crawler under the Apache Foundation, and a working crawler and search system quickly emerged. Doug Cutting, the inventor of Apache Lucene, estimated that a system supporting a billion-page index would cost around $30,000. The developers believed that it would open up and democratize search engine algorithms. Yet they soon realized that their architecture would not scale to the billions of pages on the Web (Baldeschweiler, 2013).
Google had also confronted the same problem of managing billions of web page indexes, and it created technology to overcome this challenge. In 2004, Google published another paper that introduced MapReduce, a parallel programming model based on functional programming for processing distributed data. In 2005, the Nutch developers likewise created a working MapReduce implementation in Nutch. All the major Nutch algorithms were ported to run using MapReduce and the Nutch Distributed File System (NDFS) (Baldeschweiler, 2013).
NDFS and MapReduce were very promising technologies and were spun out as an independent Apache subproject of the Lucene project called Hadoop. Around the same time, Doug Cutting was hired by Yahoo!, which provided a dedicated team and resources to turn Hadoop into a system that ran at web scale. In January 2008, Hadoop graduated to a top-level Apache project, confirming its success. Today, Hadoop is used and supported by many organizations other than Yahoo!, such as Facebook, The New York Times, Cloudera, Hortonworks, and Last.fm (Mone, 2013).

Benefits of Hadoop:

1. Cost-effective: Apache Hadoop controls costs by storing data more affordably per terabyte than other platforms. Instead of thousands to tens of thousands of dollars per terabyte, Hadoop delivers compute and storage for hundreds of dollars per terabyte.
2. Fault-tolerant: Fault tolerance is one of the most important advantages of using Hadoop. Even if individual nodes experience high rates of failure when running jobs on a large cluster, data is replicated across the cluster so that it can be recovered easily in the face of disk, node, or rack failures.
3. Flexible: The flexible way that data is stored in Apache Hadoop is one of its greatest assets – enabling organizations to generate value from data that was previously considered too expensive to store and process in conventional databases. With Hadoop, one can use many varieties of data, both structured and unstructured, to extract more meaningful business insights from more of the data.
4. Scalable: Hadoop is a highly scalable storage platform, because it can store and distribute very large data sets across clusters of many inexpensive servers operating in parallel (Business Value of Hadoop, 2013).
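The fault-tolerance and cost points interact: replicating every block multiplies the raw storage required. The sketch below uses HDFS's well-known default replication factor of 3; the function names are invented for illustration, and it assumes each replica lives on a different node.

```python
def raw_storage_needed(logical_tb, replication_factor=3):
    """HDFS stores each block replication_factor times (default 3), so raw
    disk capacity must exceed the logical data size by that factor."""
    return logical_tb * replication_factor

def data_survives(replication_factor, failed_nodes):
    """A block's data survives as long as at least one replica remains."""
    return failed_nodes < replication_factor

print(raw_storage_needed(10))   # 10 TB of data needs 30 TB of raw disk
print(data_survives(3, 2))      # losing two nodes at once: data survives
```

This trade-off is why Hadoop's per-terabyte economics matter: tripling the raw storage is only acceptable because the underlying servers are inexpensive commodity hardware.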

Responsibilities of a Hadoop Developer:

A Hadoop developer is responsible for the actual coding or programming of Hadoop applications. This role is similar to that of a software developer. The following are some of the responsibilities of a Hadoop developer:
· Hadoop development and implementation
· Loading data from disparate data sets
· Pre-processing using Hive and Pig
· Designing, building, installing, configuring, and supporting Hadoop
· Translating complex functional and technical requirements into detailed designs
· Performing analysis of vast data stores and uncovering insights
· Maintaining security and data privacy
· Creating scalable and high-performance web services for data tracking
· Managing and deploying HBase
· Testing prototypes and overseeing handover to operational teams
· Proposing best practices and standards (Gothai, 2014).

Scope of Hadoop:

The Hadoop platform has tools that can extract data from source systems, whether they are log files, machine data, or online databases, and load it into Hadoop in record time. It is possible to do transformations on the fly as well, although more elaborate processing is better done after the data is loaded into Hadoop. Programming and scripting frameworks allow complex ETL jobs to be deployed and executed in a distributed manner. Rapid improvements in interactive SQL tools make Hadoop an ideal choice for a low-cost data warehouse (Yuhanna, 2014).
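The extract-transform-load flow described above can be sketched in miniature. The log format and field names below are invented purely for illustration; a real Hadoop ETL job would run the same three stages distributed across the cluster (for example via Pig or Hive) rather than in a single Python process.

```python
import re

# Hypothetical log line format, assumed only for this sketch:
# "2017-05-14 INFO user=Alice action=login"
LOG_RE = re.compile(r"(?P<date>\S+) (?P<level>\S+) user=(?P<user>\S+) action=(?P<action>\S+)")

def extract(lines):
    """Extract: parse raw source lines into records, skipping malformed ones."""
    for line in lines:
        m = LOG_RE.match(line)
        if m:
            yield m.groupdict()

def transform(records):
    """Transform: keep only login events and normalize the user name."""
    for r in records:
        if r["action"] == "login":
            yield {"date": r["date"], "user": r["user"].lower()}

def load(records, store):
    """Load: append the cleaned records to the target store (the 'warehouse')."""
    store.extend(records)
    return store

warehouse = load(transform(extract([
    "2017-05-14 INFO user=Alice action=login",
    "not a valid log line",
    "2017-05-14 INFO user=bob action=logout",
])), [])
```

Each stage here is a generator, so records stream through one at a time, mirroring the point in the text that heavier transformation is best deferred until the data is already inside the platform.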

Proposal:

The proposal of my paper depends entirely on the procedures that I am going to apply to get employment as a Hadoop developer. Finding a job at a good company is not an easy task, as it requires a lot of hard work and technical skill. This paper presents the action research cycles as a step-by-step process for finding a job. A Hadoop developer is a highly skilled role, so I will have to work hard to gain such a position. This paper also describes the iterations that make up the process of finding a job.
The first iteration requires doing research on Hadoop programming. I will research Hadoop through online websites, journals, and books, which will give me a basic idea of Hadoop and its position in the market. While doing this research, I will also gather basic information on the roles and responsibilities of a Hadoop developer. This completes my first step and pushes me a little closer to my goal.
The second iteration involves selecting an institute and starting training. In this iteration, as part of my action research, I will select one of the best institutes to get trained in Hadoop. My skill set in Java is limited, and Hadoop is based on a Java framework, so I would like to get trained in Java first and then continue with training in Hadoop. While searching for an institute, I will inquire whether the instructor can teach me both Java and Hadoop, and whether the training will be purely theoretical or will also include practical sessions. If all the requirements mentioned above are met and I am satisfied, I will go ahead with the training. This iteration will help me gain a clear understanding of Hadoop programs and gear up for interviews.
For the third iteration, I will meet with Hadoop professionals to better understand the work culture and some of the roles and responsibilities of a Hadoop developer. As I am new to the corporate world, it is difficult for me to gauge the work life of a Hadoop developer. Meeting such professionals will help me better understand the work environment, pay, and working hours. This iteration will give me an overall snapshot of the software environment and enough confidence to face the corporate world.

Finally, the fourth iteration is preparing for the interview and applying for the developer position. A crucial step in the preparation is writing the resume. I will look at sample resumes available online, base mine on them, and then start writing my own. Next, I will start preparing for the interview process. For this, I will brush up on the technical skills required for the role in the organization. Later, I will work through mock interview questions found online, which will help me get used to the style of questioning in real interviews. Apart from technical skills, I will also concentrate on presentations and discussions, as some organizations include presentations and group discussions as part of the interview. Last but not least, I will work on my oral communication skills, as they have a great impact in any role at any organization. This iteration culminates in applying for Hadoop developer positions through various job portals like indeed.com, monsterjobs.com, and simplyhired.com. Once I receive a confirmation mail or call from an organization to attend an interview, I will prepare myself to give my best shot at starting my dream career.

Actionresearch_Assignmentinstructions/Sample papers/3-Iteration 1 x
Iteration 1: Research on Hadoop Programming

Plan:

After briefly analysing my proposal on June 8th, 2016, I came to know that smart work and analytical thinking are required to get a job as a Hadoop developer. My initial plan was to research Hadoop with the help of the internet and by referring to articles. Finding different journal articles, books, and magazines helped me understand the Hadoop developer role and its responsibilities. As part of my research, I examined the Hadoop developer pay scale and the role’s position in the market.
Initially, the research was scheduled for three days, with an hour-long session per day. The research of books and journals took about a day, and the internet and article research took about two hours. I studied the book Hadoop For Dummies to get basic knowledge about Hadoop. I used google.com as my search engine. On June 9th, I searched for the following:
1. Skills required for Hadoop developer
2. Career growth in Hadoop
3. Average salary for Hadoop
4. Future scope in Hadoop
The search returned many web pages, of which I selected just a few. I analyzed and understood these resources to get the best results for learning about Hadoop. Later, I filtered them and found the ones that best met my requirements. Thus, the online research, together with the research in books and journals, helped me understand the Hadoop developer’s role and gain knowledge of Hadoop programming.

Action:

As part of my plan, the research took three complete days. It took two complete days to review books and journals just to learn the future scope of a Hadoop developer. The articles about Hadoop showed me its current position in the market as well as the roles and responsibilities of a Hadoop developer. Next, I searched google.com for Hadoop developer skills, which directed me to the link below.

https://www.google.com/search?q=Skills+required+for+hadoop+developer&oq=Skills+required+for+hadoop+developer&aqs=chrome..69i57.16755j0j8&sourceid=chrome&es_sm=93&ie=UTF-8
The search resulted in 399,000 entries. Since this was too many to review, I filtered the results according to my requirements and analyzed the processes best suited to my research. The results of my search gave me an idea of what is needed to get trained in Hadoop.
I reviewed these websites and discovered that programming knowledge, self-assessment, and smart work are required for a person to become a Hadoop developer. All of these websites suggested learning Java to gain the programming skills needed to become a Hadoop developer. They showed me the use of Hadoop for “big data” analytics, which is one of the hottest fields in information systems today. The article “Is Hadoop Now Easy to Use?” clearly explained the need for support to keep systems up and running for mission-critical clusters. I feel that a basic understanding of distributed systems and file system design, along with Java knowledge, is required to become a Hadoop developer.

Observation:

The websites and articles that I selected were good and offered useful suggestions for getting a job. The primary and most basic observations concerned the roles and responsibilities of a Hadoop developer. One source that helped me get information about Hadoop was “Tom’s IT Pro: Real-World Business Technology”. Among the websites I filtered, I found one particularly useful: “Prerequisites for Learning Hadoop”.

These websites gave me a clear idea of what Hadoop is and the skills required to be a Hadoop developer. These articles and websites answered all of my questions about Hadoop, such as:
· How is the career growth in Hadoop?
· What are the prerequisites for Hadoop?
· Are Java and SQL required to learn Hadoop?
· What are the best ways to learn Hadoop?
· From where should I begin training in Hadoop?
From the questions answered by SAP HANA staff, I observed that I have to put in some effort to get trained in the skills that are required. After getting the answers to these questions, I know the best way to learn Hadoop. I analyzed and understood my strengths and weaknesses so that I know how good I am at a particular language and how much training I need to clear the interview.
The third observation was on the managerial requirements for Hadoop work. Although Hadoop does not require any particular process model, most organizations prefer the agile method, as it does not impose a rigidly specified process.
Finally, I observed that the technical requirements for Hadoop are Java and SQL. Learning Hadoop requires both core and advanced Java programming. It also requires almost all SQL concepts, such as queries, triggers, and cursors.

Reflection:

After going through the plan, action, and observation, this iteration was very successful in achieving my goals, and it was completed on schedule. Performing the actions on time was very important for getting a job. Getting a job is not an easy task; critical thinking, self-assessment, and smart work are required. I have to practice how I present myself so that, with a bit of luck, I can face the interview well. I consider myself a smart worker with the critical-thinking skills to resolve problems easily. I feel that in this first iteration, I made a plan and stayed in line with the project.
With my background of a bachelor’s in computer science, I have basic knowledge of Java and SQL. Because of my inexperience, I was worried about getting a job as a Hadoop developer. However, online research helped me discover that no prior experience is required for a Hadoop developer position. As I am a fast learner, I will be able to learn the concepts of Hadoop very quickly.
The first iteration concluded with the jobs and obligations of a Hadoop developer, various managerial abilities (like the agile approach), the skills required for a Hadoop developer, career growth in Hadoop, the prerequisites for a Hadoop developer, and also the technical requirements like Java and SQL, which are vital before getting trained in Hadoop.

Actionresearch_Assignmentinstructions/Sample papers/4-Final paper x

IDENTIFYING OPPORTUNITIES AS A HADOOP DEVELOPER

TABLE OF CONTENTS
Introduction
Methodology
Action Research
Purpose
Characteristics of Action Research
History of Hadoop
Benefits of Hadoop
Responsibilities of a Hadoop Developer
Scope of Hadoop
Iteration 1: Research on Hadoop Programming
Plan
Action
Observation
Reflection
Iteration 2: Starting Training in Hadoop after Selecting an Institute
Plan
Action
Observation
Reflection
Iteration 3: Meeting with Hadoop Professionals to Better Understand the Job Landscape, and Preparing a Resume
Plan
Action
Observation
Reflection
Iteration 4: Preparing for the Interview, Applying for a Hadoop Developer Position, and Attending the Interview
Plan
Action
Observation
Reflection
CONCLUSION

Introduction

In this modern era, new technologies are emerging and there are always upgrades occurring. Many of these technologies are used in different fields like robotics, aerospace engineering, and Nano technology. Human race has progressed to a greater extent with the use of technologies in various fields.
Data storage and management have gained primary importance recently. Enormous amounts of data are being used and stored by organizations.Putting away and dealing with this immense measure of information has dependably been testing a superior world despite the fact that numerous related innovations have been taking off as of late. The term “Big data” has emerged and Hadoop technology uses a set of algorithms to process large clusters of data.
This project involves getting a job opportunity as a HADOOP developer so that I can learn many things about the company – what it is, where is comes from, how it can be applied to business processes, and how to get started using it.
With this internship, I would like to do the research job opportunities at Hadoop. The primary purpose for choosing this is for the ongoing latest technology in the software industry. My undergraduate is in computer science and I gained some knowledge on SQL and databases. However, while completing my masters, I have gained good knowledge about big data, and where its types can be uniquely stored and processed in Hadoop. I believe that social media data should definitely need Hadoop for their unlimited competition and real-time decisions that include market share.
Hadoop has its roots at Yahoo!, whose internet search engine business depends on constant processing of large amounts of web page data. Eric Baldeschwieler challenged Owen O'Malley, co-founder of Hortonworks, to solve a hard problem: store and process the data on the web in a simple, scalable, and economically feasible way. They looked at traditional storage approaches, but they quickly figured out that these simply were not going to work for the sort of data (much of it unstructured) and the sheer quantity Yahoo! would need to manage (Baldeschweiler, 2013). Hadoop is a data management platform, as it offers a lower-cost storage framework and open source development (Yuhanna, 2014). Research suggests that the broad enterprise can embrace and use Hadoop to create big data value (Business Value of Hadoop, 2013).
Hadoop became a foundational architecture at Yahoo!, underpinning an extensive variety of business-critical applications. Organizations in nearly every vertical began to embrace Hadoop. By 2010, the community had attracted a large number of users, and broad enterprise momentum had been created (Baldeschweiler, 2013).
While each organization is different, their big data problems are frequently fundamentally the same. Hadoop, as a critical piece of the emerging modern data architecture, is gathering enormous amounts of data from social media activity, clickstream data, web logs, financial transactions, video, and machine/sensor data from equipment in the field (Business Value of Hadoop, 2013).
According to Baldeschwieler (2013), Hadoop is a system for scalable and reliable distributed data storage and processing. It allows for the processing of vast data sets across clusters of machines using a simple programming model. It is designed to scale up from single servers to thousands of machines, aggregating the local computation and storage of every server.

Methodology

Action Research:

Action research is a methodology of systematic inquiry that empowers people to find effective solutions to real problems encountered in everyday life. Action research has a long and distinguished lineage that spans more than 50 years across several continents. The term action research has long been associated with the work of Kurt Lewin, who viewed this research methodology as cyclical, dynamic, and collaborative in nature. Through repeated cycles of planning, observing, and reflecting, individuals and groups engaged in action research can implement the changes required for social change (Lavery, 2014).
According to Corey (1953), action research is the process by which practitioners attempt to study their problems scientifically in order to guide, correct, and evaluate their actions and decisions. Understanding action research involves recognizing how the responses to these struggles helped create new approaches to, and understandings of, substantive change over long periods of time and across physical, social, and emotional boundaries (Glassman, 2014).
The distinction between action research and other types of research is that during the process, researchers need to develop and use a range of skills to achieve their aims, such as careful planning, sharpened observation and listening, evaluation, and critical reflection. Traditional research is conducted to report and publish conclusions that can be generalized to larger populations, whereas action research is conducted to take action and effect a positive change in the environment that was identified. Traditional research can be done in an environment that can be controlled, but action research is carried out in real settings such as schools and classrooms (Reason, 2008).

Purpose:

The purpose of the research was twofold. First, it was to investigate how teachers perceive and interpret evidence gathered through action research projects conducted within school settings. Second, it was to explore how these understandings could be used to inform professional practice within the schools (Lavery, 2014).

Stages of Action Research: The process of action research is only poorly described as a mechanical sequence of steps. According to McTaggart (2007), it is generally thought to involve a spiral of self-reflective cycles of the following:
· Planning a change
· Acting and observing the process and consequences of the change
· Reflecting on these processes and consequences
· Replanning
· Acting and observing again
· Reflecting again, and so on.

Initial Reflection: Action research emerges from an issue, dilemma, or uncertainty in the situations in which practitioners find themselves. It might be at the level of a general concern, an apparent need, or a course-related issue (Lewin, 1952).

Planning: The most essential outcome of the planning stage is a detailed plan of the action that is intended to be taken to make the changes (Lewin, 1952).

Action: In the light of experience and feedback, make minor deviations from the plan and record the deviations along with the reasons behind them. Moreover, new insights can be incorporated into the current project or recorded for a future project.

Observation: Detailed observation, checking, and recording enables us to assess the impact of the action and hence the adequacy of the proposed change. It is best to maintain a diary or journal to record the observations and insights of the project (Lewin, 1952).

Reflection: At the end of every cycle, it is important to reflect on the observations that have been recorded in the diaries or journals.

Figure 1: Integration of two organizational schemes for the step-by-step process of action research. Source: (Reason, 2008).

Characteristics of Action Research:
According to Schuler (1996), the components of action research are the five C's:
· Commitment: Action research takes time, and all the participants involved in the project must commit to it. The participants need time to trust each other and to observe practice, changes, approaches, and documents, to reflect, and finally to see the results.
· Collaboration: In action research, the power relations among participants are equal; every individual contributes and has a stake. Collaboration is not the same as compromise; rather, it involves a cyclical process of sharing, giving, and taking.
· Concern: The concern of action research means that all participants will develop a supportive group of critical friends.
· Consideration: Reflective practice is the careful review of one's professional actions. Reflection requires concentration and careful consideration as one looks for patterns and connections that will produce meaning within the investigation. Reflection is a challenging, focused, and critical assessment of one's own conduct as a means of developing one's craft.
· Change: Change is difficult, but it is an important element in remaining effective as a practitioner.
In my opinion, action research is a form of investigation well suited to a project aimed at gaining employment as a Hadoop developer. It is an effective tool for getting a job in Hadoop. This action research gave me the idea to split the whole process into small iterations. Each iteration includes the plan, action, observation, and reflection stages of action research. This iterative process helped me work efficiently to achieve my targets. It gave me knowledge and skills related to Hadoop and showed me the best way to gain employment.

Literature Review:

Hadoop is an open source, reliable, and scalable distributed computing platform that stores and processes data. It includes a fault-tolerant storage system known as the Hadoop Distributed File System (HDFS). HDFS is capable of storing large amounts of data, growing incrementally, and surviving the failure of major parts of the storage infrastructure without losing data (What Is Hadoop?, 2008).
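The block-and-replication idea behind this kind of fault-tolerant storage can be illustrated with a small, single-machine sketch. This is plain Java, not the real HDFS API; the class name, the tiny block size, and the round-robin placement are illustrative assumptions only.

```java
import java.util.*;

// Toy illustration of the HDFS storage idea: a file is cut into fixed-size
// blocks, and each block is copied onto several "nodes", so losing one node
// does not lose data. (Real HDFS uses 128 MB blocks and rack-aware placement.)
public class HdfsSketch {
    static final int BLOCK_SIZE = 4;   // bytes per block (tiny, for illustration)
    static final int REPLICATION = 3;  // copies of each block

    // Split the file contents into blocks of at most BLOCK_SIZE bytes.
    static List<byte[]> splitIntoBlocks(byte[] file) {
        List<byte[]> blocks = new ArrayList<>();
        for (int off = 0; off < file.length; off += BLOCK_SIZE) {
            blocks.add(Arrays.copyOfRange(file, off,
                    Math.min(off + BLOCK_SIZE, file.length)));
        }
        return blocks;
    }

    // Assign each block to REPLICATION distinct nodes, round-robin.
    static Map<Integer, List<Integer>> placeBlocks(int blockCount, int nodeCount) {
        Map<Integer, List<Integer>> placement = new HashMap<>();
        for (int b = 0; b < blockCount; b++) {
            List<Integer> nodes = new ArrayList<>();
            for (int r = 0; r < REPLICATION; r++) {
                nodes.add((b + r) % nodeCount);
            }
            placement.put(b, nodes);
        }
        return placement;
    }

    public static void main(String[] args) {
        byte[] file = "hello hadoop".getBytes();           // 12 bytes -> 3 blocks
        List<byte[]> blocks = splitIntoBlocks(file);
        Map<Integer, List<Integer>> placement = placeBlocks(blocks.size(), 5);
        System.out.println(blocks.size() + " blocks placed on nodes: " + placement);
    }
}
```

Because every block lives on three different nodes, any single node (or, with rack-aware placement, a whole rack) can fail and each block is still readable from a surviving copy.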
Hadoop leverages a cluster of nodes to run MapReduce programs massively in parallel. A MapReduce program consists of two steps: the Map step processes data, and the Reduce step assembles intermediate results into a final result. Each cluster node has a local file system and a local CPU on which to run the MapReduce programs. Data is broken into blocks, stored across the local files of different nodes, and replicated for reliability. The local files constitute the file system called the Hadoop Distributed File System (HDFS). The number of nodes in each cluster varies from hundreds to many thousands of machines. Hadoop can also handle a certain set of fail-over situations (Lay, 2010).
Hadoop has developed into the framework of choice for engineers analyzing big data in fields such as finance, marketing, and bioinformatics (Zaharia, n.d.). At the same time, the changing nature of data itself, along with a desire for faster feedback, has sparked interest in new approaches, including tools that can deliver ad hoc, real-time processing and the capacity to parse the interconnected data flooding out of social networks and mobile devices (Mone, 2013).
Hadoop is aimed at problems that require assessment of all accessible data. For instance, image processing and text analysis usually mandate that every single record be examined, often in the context of similar records. Hadoop uses a procedure called MapReduce to carry out this comprehensive analysis quickly (What Is Hadoop?, 2008).
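The Map and Reduce steps described above can be sketched in a few lines of plain, single-machine Java. This is not the Hadoop API; the class and method names are illustrative assumptions, and both phases run locally here so the flow of data is easy to follow. The classic example is counting word occurrences.

```java
import java.util.*;
import java.util.stream.*;

// A minimal, single-machine sketch of the MapReduce idea: the Map phase
// turns input into (key, value) pairs, and the Reduce phase groups the
// pairs by key and combines the values. In real Hadoop, both phases run
// in parallel across many cluster nodes.
public class WordCountSketch {

    // "Map" phase: emit a (word, 1) pair for every word in every line.
    static List<Map.Entry<String, Integer>> map(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .filter(w -> !w.isEmpty())
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // "Reduce" phase: group the intermediate pairs by key and sum the counts.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        return pairs.stream().collect(Collectors.groupingBy(
                Map.Entry::getKey, Collectors.summingInt(Map.Entry::getValue)));
    }

    public static void main(String[] args) {
        List<String> input = List.of("big data needs big tools",
                                     "hadoop stores big data");
        Map<String, Integer> counts = reduce(map(input));
        System.out.println(counts.get("big"));   // prints 3
        System.out.println(counts.get("data"));  // prints 2
    }
}
```

The key property that makes this parallelizable is that each (word, 1) pair can be produced independently, and pairs sharing a key can be summed on whichever node they land on.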

History of Hadoop:

In 2002, when the Nutch project began as an open source web crawler under the Apache Foundation, a working crawler and search system quickly emerged. Doug Cutting, the creator of Apache Lucene, estimated that a system supporting a billion-page index would cost around $30,000. The developers believed such a system would open up and democratize search engine algorithms. Yet they soon realized that their architecture would not scale to the billions of pages on the Web (Baldeschweiler, 2013).
Google had also faced the same problem of managing billions of web page indexes, and it created technology to overcome this challenge. In 2004, Google published another paper that introduced MapReduce, a parallel programming model based on functional programming for processing distributed data. In 2005, the Nutch developers created a working MapReduce implementation in Nutch. All the major Nutch algorithms were ported to run using MapReduce and the Nutch Distributed File System (NDFS) (Baldeschweiler, 2013).
NDFS and MapReduce were very promising technologies and were spun out as an independent Apache subproject of Lucene called Hadoop. Around the same time, Doug Cutting was hired by Yahoo!, which provided a dedicated team and the resources to turn Hadoop into a system that ran at web scale. In January 2008, Hadoop graduated to a top-level Apache project, confirming its success. Hadoop is now being used and supported by many organizations besides Yahoo!, such as Facebook, The New York Times, Cloudera, Hortonworks, and Last.fm (Mone, 2013).

Benefits of Hadoop:

1. Cost-effective: Apache Hadoop controls costs by storing data more affordably per terabyte than other platforms. Instead of thousands to tens of thousands of dollars per terabyte, Hadoop delivers compute and storage for hundreds of dollars per terabyte.
2. Fault-tolerant: Fault tolerance is one of the most important advantages of using Hadoop. Even if individual nodes experience high rates of failure when running jobs on a large cluster, data is replicated across the cluster so that it can be recovered easily despite disk, node, or rack failures.
3. Flexible: The flexible way that data is stored in Apache Hadoop is one of its biggest assets, enabling organizations to generate value from data that was previously considered too expensive to store and process in conventional databases. With Hadoop, one can use many varieties of data, both structured and unstructured, to extract more meaningful business insights from more of the data.
4. Scalable: Hadoop is a highly scalable storage platform, because it can store and distribute very large data sets across clusters of many inexpensive servers operating in parallel (Business Value of Hadoop, 2013).

Responsibilities of a Hadoop Developer:

A Hadoop developer is responsible for the actual coding or programming of Hadoop applications. This role is similar to that of a software developer. The following are some of the responsibilities of a Hadoop developer:
· Hadoop development and implementation
· Loading from disparate data sets
· Pre-processing using Hive and Pig
· Designing, building, installing, configuring, and supporting Hadoop
· Translating complex functional and technical requirements into detailed designs
· Performing analysis of vast data stores and uncovering insights
· Maintaining security and data privacy
· Creating scalable and high-performance web services for data tracking
· Managing and deploying HBase
· Testing prototypes and overseeing handover to operational teams
· Proposing best practices and standards (Gothai, 2014).

Scope of Hadoop:

The Hadoop platform has tools that can extract data from the source systems, whether they are log files, machine data, or online databases, and load it into Hadoop in record time. It is possible to do transformations on the fly as well, although more elaborate processing is better done after the data is loaded into Hadoop. Programming and scripting frameworks allow complex ETL jobs to be deployed and executed in a distributed way. Rapid improvements in interactive SQL tools make Hadoop an ideal choice for a low-cost data warehouse (Yuhanna, 2014).

Proposal:

The proposal of my paper is entirely based on the procedures that I am going to apply to gain employment as a Hadoop developer. Finding a job in a good company is not an easy task, as it requires a lot of hard work and technical skill. This paper presents the action research cycles as a step-by-step process for finding a job. Hadoop developer is a demanding, high-level role, so I will have to work hard to secure it. The paper also describes the iterations that make up this job search process.
The first iteration involves doing research on Hadoop programming. I will research Hadoop through online websites, journals, and books, which will give me a basic idea of Hadoop and its position in the market. While doing this research, I will also gather basic information about the roles and responsibilities of a Hadoop developer. This completes my first step and pushes me a little ahead toward my goal.
The second iteration involves selecting an institute and starting training. In this iteration, as part of my action research, I will select one of the best institutes to get trained in Hadoop. My skill set in Java is limited, and Hadoop is based on the Java framework, so I would like to get trained in Java first and then continue with training in Hadoop. In the process of searching for an institute, I will inquire whether the instructor can teach me both Java and Hadoop, and whether the training will be purely theoretical or will also include practical sessions. If all the above requirements are met and I am satisfied, I will go ahead with the training. This iteration will help me develop a clear understanding of Hadoop programs and gear myself up for interviews.
For the third iteration, I will meet with Hadoop professionals to better understand the work culture and some of the roles and responsibilities of a Hadoop developer. As I am new to the corporate world, it is difficult for me to assess the work life of a Hadoop developer on my own. Meeting such professionals will help me better understand the work environment, the pay, and the working hours. This iteration will give me an overall snapshot of the software environment and enough confidence to face the corporate world.

Finally, the fourth iteration is preparing for the interview and applying for the developer position. A crucial step of the preparation is resume writing. I will look at sample resumes available online, base mine on them, and then write my own. Next, I will start preparing for the interview process. For this, I will revamp the technical skills that are required for the role in the organization. Later, I will work through mock interview questions found online, which will help me get a feel for the questions asked in real interviews. Apart from technical skills, I will also concentrate on presentations and discussions, as some organizations include presentations and group discussions as part of the interview. Last but not least, I will work on my oral communication skills, as they have a great impact in any role in any organization. This iteration involves applying for Hadoop developer positions through various job portals like indeed.com, monsterjobs.com, and simplyhired.com. Once I receive a confirmation email or call from an organization to attend an interview, I will prepare myself to give my best shot at starting my dream career.

Iteration 1: Research on Hadoop Programming

Plan:

After briefly analysing my proposal on June 8th, 2016, I came to understand that smart work and analytical thinking are required to get a job as a Hadoop developer. My initial plan was to research Hadoop using the internet and by referring to articles. Finding different journal articles, books, and magazines helped me understand the Hadoop developer role and its responsibilities. As part of my research, I examined the Hadoop developer pay scale and the role's position in the market.
The research was initially scheduled for three days, with an hour-long session per day. The research into books and journals took about a day, and the internet and article research took about two hours. I studied the book Hadoop For Dummies to get basic knowledge about Hadoop. I used google.com as my search engine. On June 9th, I searched for the following:
1. Skills required for Hadoop developer
2. Career growth in Hadoop
3. Average salary for Hadoop
4. Future scope in Hadoop
The search resulted in many web pages, of which I selected just a few. I analyzed these pages to learn as much as I could about Hadoop. Later, I filtered the results and found the resources that best met my requirements. Thus, the online research, together with the research into books and journals, helped me understand the Hadoop developer's role and gain knowledge of Hadoop programming.

Action:

As part of my plan, the research took three complete days. It took two complete days to review books and journals just to understand the future scope of the Hadoop developer role. The articles about Hadoop showed me its current position in the market as well as the roles and responsibilities of a Hadoop developer. Next, I searched google.com for information about the Hadoop developer role, which directed me to the link below.

https://www.google.com/search?q=Skills+required+for+hadoop+developer&oq=Skills+required+for+hadoop+developer&aqs=chrome..69i57.16755j0j8&sourceid=chrome&es_sm=93&ie=UTF-8
The search returned 399,000 results. Since this was far too many to review, I filtered them according to my requirements and analyzed the most relevant material for my research. The results of my search gave me an idea of what is needed to get trained in Hadoop.
I reviewed these websites and discovered that programming knowledge, self-assessment, and smart work are required for a person to become a Hadoop developer. All these websites suggested learning Java to build the programming skills needed to become a Hadoop developer. They also showed me the use of Hadoop for "big data" analytics, which is one of the hottest fields in information systems today. The article "Is Hadoop Now Easy to Use?" clearly explained the need for support to keep systems up and running in mission-critical clusters. I feel that a basic understanding of distributed systems and file system design, along with Java knowledge, is required to become a Hadoop developer.

Observation:

The websites and articles that I selected were good and gave me useful suggestions for getting a job. My primary and most basic observations concerned the roles and responsibilities of a Hadoop developer. One source that helped me gather information about Hadoop was "TOM'S IT Pro: Real-World Business Technology". Among the websites I filtered, I felt one was especially useful: Prerequisites for Learning Hadoop.

These websites gave me a clear idea of what Hadoop is and the skills required to be a Hadoop developer. These articles and websites answered all of my questions about Hadoop, such as:
· How is the career growth in Hadoop?
· What are the prerequisites for Hadoop?
· Is Java and SQL required to learn Hadoop?
· What are the best ways to learn Hadoop?
· From where should I begin training in Hadoop?
From the questions answered by SAP HANA staff, I observed that I would have to put in some effort to get trained in the required skills. After getting the answers to these questions, I now know the best way to learn Hadoop. I analyzed my strengths and weaknesses to improve my skills, so that I know how good I am in a particular language and how much further training I need in order to clear the interview.
The third observation was about the managerial requirements surrounding Hadoop work. Although Hadoop does not mandate any particular process model, most organizations prefer the agile method, since Hadoop itself has no specified process.
Finally, I observed that the technical requirements for Hadoop are Java and SQL. Learning Hadoop requires both core and advanced Java programming. It also requires almost all the concepts of SQL, such as queries, triggers, and cursors.

Reflection:

After going through the plan, action, and observation, this iteration was very successful in achieving my goals, and it was completed on schedule. Performing the actions on time was very important for getting a job. Getting a job is not an easy task; critical thinking, self-assessment, and smart work are required. I have to practice speaking confidently in order to face the interview. I consider myself a smart worker with the critical thinking needed to resolve problems easily. I feel that in my first iteration, I made a plan and stayed in line with it.
With my background of a bachelor's degree in computer science, I have basic knowledge of Java and SQL. Because of my inexperience, I was worried about getting a job as a Hadoop developer. However, online research helped me discover that no prior experience is required for a Hadoop developer. As I am a fast learner, I am able to pick up the concepts of Hadoop very quickly.
The first iteration concluded with the roles and responsibilities of a Hadoop developer, various managerial approaches (like the agile method), the skills required for a Hadoop developer, career growth in Hadoop, the prerequisites for a Hadoop developer, and the technical requirements like Java and SQL, which are vital before getting trained in Hadoop.

Iteration Two: Started training in Hadoop after selecting an institute

Plan:

After coming up with the idea of how to become a Hadoop developer, I realised that to improve my skills, I needed to focus more on technical knowledge. I first focused on the skills required for becoming a Hadoop developer, namely Java and SQL. I also selected a good training institute with a well-equipped lab where Hadoop could be practiced. Before joining the institute, I inquired about all the relevant information and learned about its ranking and certification. The trainer at the institute was very good at clearing up all the students' doubts.
After selecting the institute, training was planned for 60 days, with two hours per day. Students from different educational backgrounds, such as computer science, electrical, and mechanical engineering, were also part of the training. All the important topics were covered in the classes, and I attended them regularly. In the training, I acquired knowledge of the basic concepts of Java, Java programming, SQL, big data, Hadoop concepts, and Hadoop programming.
This is how I selected an institute that provided me with an environment in which I could learn easily and think logically. The training session then began as per the planned schedule. I understood the scope of the Hadoop developer role after gaining overall knowledge of Hadoop theory.

Action:

The institute I selected had many instructors; three of them taught us Hadoop. I was trained in Java and SQL first and then in Hadoop, which gave me good coding knowledge.
There were 60 days of training by Mr. Vivek Meshra, Mr. Srikanth Narayan, and Mr. Ambica Ram. The training began with 10 participants in the class, 5 of whom did not have any computer background. During the first 30 days of training, Mr. Vivek Meshra explained to us the basic concepts and programming of Java. I gained thorough knowledge of Java concepts and Java projects. Whenever my instructor gave me a few questions to answer, I worked hard to learn the logic by practicing and reviewing the notes he provided. After completing the Java training, I took the Java certification exam and scored well on it.
Mr. Srikanth Narayan then trained us in SQL for 10 days. During the first week, I learned about the concepts of SQL, which consisted mainly of structures and commands (DDL, DML, and DCL). After discussing queries in class, he used to give daily assignments. It was easy for me to write the queries because I found them very interesting. Later, in the second week, I acquired knowledge about SQL Server and normalization concepts. In the last 20 days, Mr. Ambica Ram trained us. In the first week, I learned the basics of Hadoop, such as its importance, its main components (HDFS and MapReduce), and its overall architecture. During the second week, I acquired knowledge about the Hadoop Distributed File System (HDFS). In the third week, I learned about the MapReduce abstraction, which included the word count code, its failures, and recoveries. Finally, in the last week, the instructor explained the Java map method, Pig Latin scripts, and the basics of HiveQL. This is how I got trained at an institute and learned about Hadoop, Java, and SQL.

Observation:

After the completion of the second iteration, which included the training sessions on the concepts of Java, SQL, and Hadoop and on the role of a Hadoop developer, I observed various things, and all those observations helped me understand the concept of Hadoop more clearly.
My first observation concerned the selection of an institute, which was necessary to get training in Hadoop. The important thing I noticed is that the institute provides a lab facility with all the required and up-to-date software on its computers. I also observed that the institute is made more flexible by providing separate instructors for the Java, SQL, and Hadoop training. The instructors hold Java certification and also provide material for Hadoop.
My second observation focused on the technical skills of Hadoop. The main job of a Hadoop developer is to write consistent code, which requires good knowledge of database structures and back-end programming. This is how the developer writes code in Java, SQL, and Hadoop.
The third observation was about the training in Hadoop. The training was done in three parts:
1. Java training: The Java language is primarily designed to work within a distributed programming context. As Hadoop stores and distributes big data across distributed computers, it becomes necessary to use features readily available in Java to carry out functionality in a distributed environment. Java was selected for its high performance, platform independence, object-oriented design, and the availability of a vast library for accomplishing common tasks. The Java training involved using the Java Virtual Machine and runtime to execute program byte code. The code was also tested on a distributed network to verify that it functioned correctly.

Figure 1: Symbol of Java Programming

2. SQL training: Besides the coding part, Hadoop also requires the study of SQL, because Hadoop is mainly designed to meet the requirements associated with the storage and distribution of big data in a distributed system. The SQL training was based on the MySQL framework, which is widely supported by most programming languages and hardware. The SQL training was beneficial for accomplishing database tasks from local terminals and for adding functionality to Java programs so that they can work with Hadoop. Both theoretical and practical knowledge of SQL was gained and used in the training.

Figure 2: MySQL by Oracle

3. Hadoop training: After gaining an understanding of Java and SQL code, it was time for hands-on Hadoop training. Practical and theoretical learning covered HDFS, MapReduce, and Pig Latin scripts, in order to understand the workings and significance of Hadoop. Some real cases were also considered to make the material easier to understand. I used Hadoop to ensure the smooth management of big data in a distributed network of computers that illustrated how it works. Java code and SQL queries were combined to sharpen my skills in working with multiple languages in a single program.

Figure 3: Hadoop framework
From these observations, I concluded that as a Hadoop developer it is very important to have good knowledge of SQL and Java, and that programming knowledge in general is essential.

Reflection

I successfully completed the second iteration by gaining the knowledge required of a Hadoop developer. It all began with the selection of an institute; it was then necessary to follow the planned schedule, interact in the training sessions, and submit the regular assignments. The institute also provides the Java certification, which will help me get a good job. It was very easy for me to learn the concepts of Java and SQL, as I was very interested in and curious about them. However, I had expected more practical work on Hadoop, and fewer tools were provided for Hadoop development than I had hoped.
Finally, the second iteration was completed successfully: I gained overall knowledge of Hadoop and its practical requirements (Java, SQL, and Hadoop itself), I will receive Java certification, and I acquired knowledge of tools like Pig and Hive, which helped me understand the different abstraction levels.

Iteration Three: Meeting with Hadoop professionals and preparing a resume for a better understanding of the job landscape

Plan:

After gaining the required knowledge, I planned to meet Hadoop professionals to learn more about the software industry. I searched for professionals on social networking sites like Facebook and LinkedIn, and I found that many software professionals share their knowledge and experience about how to improve one's skills and body language. Just by following and contacting them, I came to know many things, including a better understanding of the job market and the skills required to become a Hadoop developer.
To get a good job, the employer must know about the technical skills that I possess. Therefore, I prepared a resume and applied for interviews. It took me four days to complete my resume. I prepared it carefully, describing my activities, my technical and managerial skills, and my qualifications. I presented my resume in a proper format, tailored to the requirements of the company. I also included my interests and experience. In this way I learned managerial skills from the experience of the software professionals. I followed this plan in order to gain more knowledge of the work done by a Hadoop developer and to prepare my resume.

Action:

Soon after following the software professionals on Facebook and LinkedIn, I asked them to share their experiences. I learned about the salary structure and responsibilities of a Hadoop professional, and after talking with them I gained a clear understanding of the communication between a superior and a subordinate.
It took me four days to complete my resume. On the first day, I collected sample resumes from the Internet. I focused mainly on the format and listed my various strengths and weaknesses, both technical and managerial.
On the second day, I drafted the resume itself, covering my career objective, strengths, and weaknesses according to the requirements of the job. I placed the career objective first to highlight my actual goal and the position available in the particular organization.
On the third day, I focused on the technical and communication skills the company required and added my achievements. I had six months of experience working on a mini project, the optimization of a wireless LAN for longwall coal mine automation, which was a Hadoop project, so I included it as a major point along with the project title, description, duration, and software used, so that the employer would clearly understand my experience as a Hadoop developer.
Finally, I checked for grammatical errors and reviewed my resume thoroughly so that it contained no mistakes. Although I fixed the mistakes myself, to be more certain I emailed the resume to Mr. Ram Movva, an experienced Hadoop developer. He checked it, corrected any mistakes he noticed, and sent it back to me. I was then much more confident in my resume.

Observation:

From this iteration I observed many things; for example, I learned about the experience of working as a Hadoop developer and how to prepare a professional resume.
My first observation concerned the Hadoop developer's role. My thinking changed after meeting the Hadoop professionals: previously I believed there was no difference between a Hadoop project and a Java project and that the pay was the same, but I learned that the two differ considerably and that Hadoop projects generally pay more than Java projects.
My second observation focused on communication skills in the office. After contacting the Hadoop professionals, I observed that good communication between colleagues is needed to understand a project clearly.
My third observation concerned the format of the resume. By collecting sample resumes from the Internet, I identified the important contents that need to be included in a resume. The fourth observation focused on the career objective and the weaknesses; after including these in my resume, I began analyzing my strengths, weaknesses, and career objectives.
My fifth observation related to correcting errors in the resume. Although I had fixed most of the errors myself, I passed the resume to an experienced person to make it perfect, and I learned that some grammatical errors had escaped my notice.
Finally, I observed that while preparing the resume I learned a great deal about myself by describing my skills, strengths, weaknesses, and experience to the employer. In this way I prepared a robust, error-free resume and forwarded it to a more experienced person to recheck it and correct my mistakes.

Reflection:

By meeting talented and experienced Hadoop professionals and designing a suitable resume, this iteration was completed successfully. Contacting Hadoop professionals, studying sample resumes, and fixing the errors taught me the importance of presenting my skills to an employer. Because of my limited experience, I had little knowledge of the organizational environment; after talking to the professionals, I learned many things related to Hadoop, the most important being communication skills.
I gained six months of experience on the mini project and clearly learned the importance of teamwork in an organization during that work. An employer who sees my resume will recognize the value of my Hadoop experience and how I meet the requirements of the job. I now realize the value and purpose of a resume in organizational culture. A well-designed resume boosts my confidence, as it will be accepted by recruiters, and by mentioning my personal experience I was able to present a clear, standard description of myself.
Reviewing the resume was just as important as sending it to an experienced person for feedback.

Iteration 4: Preparing for the Interview, Applying for a Hadoop Developer Position, and Attending the Interview

Plan:

After designing the resume, the next step was to prepare for interviews and apply for a Hadoop developer position, so I prepared thoroughly. The interview would be my first platform for giving a general idea of how I can work and of the knowledge I have about the organization.
My first task was to identify the most common questions asked during interviews. Second, I would assess my caliber and review the technical knowledge I had gained during training. The third step was to improve my communication skills. Fourth, I would ask seniors or experienced people to carry out mock interviews with me. Fifth, I would check for the errors I had made during the whole process.
The next step was to find vacant Hadoop developer positions through monster.com, careerjobs.com, and indeed.com, and then apply for the jobs identified on those job search websites. If an interviewer shortlisted my resume, the organization might administer an aptitude test, although many organizations do not. If I cleared the aptitude test, I would then face the interview, which could be face to face or telephonic.
In the last stage, I prepared for interview questions about problem solving and technical skills. I then planned to apply for the Hadoop developer position through the websites and give the interview with full enthusiasm.

Action:

As planned, I prepared well for the interview. On the Internet, I searched for common interview questions and reviewed the technical skills I had gained during the training in the second iteration, the same skills included in my resume. I worked hard to improve my communication skills and to learn how to present on Hadoop, all of which helped boost my confidence.
After this, I looked for developer positions online, selected organizations that had vacant positions and a reputed business, and uploaded my resume on their websites.
At last I received a call for an interview. The interview was face to face, and the organization's location was quite far from me. After the interview, I got a call from the interviewer saying that I had been selected for the job as a Hadoop developer.

Observation:

I observed that, after carrying out the plans, preparing for interviews, and applying for jobs, I was confident enough to present my skills and knowledge to others. I also observed, however, that my academic skills were not as strong, and that I need to take some initiative to improve them; doing so will make me better prepared to offer my skills to a big, reputed company.
Previously I did not have a clear idea of the qualities needed to impress an interviewer. After thorough observation and preparation, I felt much more confident facing the interview. I noticed that cracking interviews successfully requires withstanding tight competition, and during the interviews I noticed that I stood out among the candidates in terms of skills and qualifications. The mock interviews helped me greatly in facing the real interview with confidence and positive energy.
I made a close study of the company's profile and its requirements and found that they matched my skills completely. I also observed that by joining this particular company I could build a successful career, since it would enhance and develop my skills and brighten my future. Finally, after being shortlisted, I observed that full focus and concentration are required while attending interviews.

Reflection:

This concentration and this iteration helped me greatly in preparing for the interview and cracking it successfully at a big, reputed company. It helped me identify and group all my strengths and weaknesses, which in turn helped me convert my negative points into positive ones and made me a more complete, all-round candidate. The proper guidance of my seniors and the mock interviews showed me the right direction in preparing myself for the interview.
The growth of my career and future would be fully influenced by the company's selection process. I found a very good job opportunity and took additional care in matching the company's requirements with my skills and knowledge. I was confident of getting an interview call because I had concentrated fully on improving my practical skills. Finally, I received a call from the company, gave my interview with full confidence, and answered all the questions confidently. In the end I was selected and accepted the position of Hadoop developer.

CONCLUSION
This research project was a success and a valuable learning experience. Over 15 weeks of research, I gained knowledge about how to get a job successfully. I found that cracking interviews requires working smart. From this research, I learned the process of discovering the required job, the processes involved, and the action research needed to obtain it. I also learned how to divide work into small segments and to work through those segments systematically.
In this report, I divided the action research into four iterations and worked through each of them. These iterations helped me greatly in searching for the required employment, working successfully on the mini project, preparing my resume and for the interview, meeting Hadoop professionals, applying to companies, and attending the interview successfully. The Hadoop professionals helped me prepare an eye-catching, good resume and prepare myself for the interviews. Together, the iterations helped me present my resume in the market, attend the interview successfully, and finally secure a job as a Hadoop developer at a big, reputed company.

References

Business Value of Hadoop. (2013, June). Retrieved from Hortonworks Inc.: http://hortonworks.com/wpcontent/uploads/2014/05/Hortonworks.BusinessValueofHadoop.v1.0
Corey, S. (1953). Action research. In Action research to improve school applications. New York: Bureau of publications.
Dumbill, E. (2012, January 19). Volume, Velocity, Variety: What You Need to Know About Big Data. Retrieved from Forbes: http://www.forbes.com/sites/oreillymedia/2012/01/19/volume-velocity-variety-what-you-need-to-know-about-big-data/
Baldeschwieler, E. O. O. (2013, May). Apache Hadoop basics. Retrieved from Hortonworks: http://hortonworks.com/wp-content/uploads/downloads/2013/07/Hortonworks.ApacheHadoopBasics.v1.0
Freeman, W. (2002, February). Business Resumption Planning: A Progressive Approach. Retrieved from SANS Institute: http://www.sans.org/reading-room/whitepapers/recovery/business-resumption-planning-progressive-approach-562
Glassman, M. (2014). Participatory action research and its meanings: Vivencia, praxis, conscientization. Adult Education Quarterly, 206-221.
Gothai, E. B. P (2014). A Novel approach for partitioning in Hadoop. Journal of Theoretical and Applied Information Technology, 537-541.
Lavery, G. S. (2014). Action research: Informing professional practice. Issues in Educational Research, 162-173.
Lay, P. (2010, November). Leveraging Massively Parallel Processing in an Oracle environment for big data analytics. Retrieved from An Oracle White Paper: http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-hadoop-oracle-194542
Lewin. (1952). Stages of an Action Research Project. 462-463.
McTaggart, S. K. (2007). Participatory action research. Communicative action and public sphere, 271-327.
Mone, G. (2013). Beyond Hadoop. Communications of the ACM, 22-24.
Reason, P. A. (2008). Participative inquiry and practice. In The SAGE Handbook of Action Research. London: SAGE.
Rosenthal, D. P. (n.d.). Business Resumption Planning. Retrieved from The business forum: http://www.bizforum.org/whitepapers/calstatela.htm
Schuler, B. E. (1996). Action research in early childhood education. 3.
What Is Hadoop? (2008, March). Retrieved from Cloudera: http://www.hurence.com/sites/default/files/What%20is%20Hadoop
Yuhanna, M. G. (2014). The Forrester Wave™: Big Data Hadoop Solutions, Q1 2014. For Application Development & Delivery Professionals, 15.
Zaharia, M. (n.d.). Introduction to Mapreduce and Hadoop. Retrieved from RAD Lab: http://www.cs.berkeley.edu/~matei/talks/2010/amp_mapreduce
