Learn geek languages like Big Data, Hadoop, Hive, Pig, Sqoop, Flume, Cassandra, HBase, Ruby on Rails, Python, Java and many more.

Tuesday 25 October 2016

What is Apache Hive?
Apache Hive is a data warehouse system built to work on Hadoop. It is used for querying and managing large datasets residing in distributed storage. Before becoming an open-source project of Apache Hadoop, Hive originated at Facebook. It provides a mechanism to project structure onto the data in Hadoop and to query that data using a SQL-like language called HiveQL.

What is HQL?
Hive defines a simple SQL-like query language for querying and managing large datasets, called HiveQL (HQL). It is easy to use if you are familiar with SQL. Hive also allows programmers who are familiar with MapReduce to plug in custom mappers and reducers to perform more sophisticated analysis.

Uses of Hive:

1. Apache Hive enables SQL-style queries over data kept in distributed storage.
2. Hive provides tools to enable easy data extract/transform/load (ETL).
3. It imposes structure on a variety of data formats.

Data Definition Language (DDL)

DDL statements are used to build and modify the tables and other objects in the database.
Examples: CREATE, DROP, TRUNCATE, ALTER, SHOW, DESCRIBE statements.

Data Manipulation Language (DML)

DML statements are used to retrieve, store, modify, delete, insert and update data in the database.
Examples: LOAD, INSERT statements.
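As a concrete sketch of both statement types (the employees table and its columns are made-up examples, not from the original post), the following writes a small HiveQL script that could then be run with hive -f:

```shell
# Write a sample HiveQL script; table and column names are hypothetical.
cat > /tmp/sample.hql <<'EOF'
-- DDL: create, describe, and alter a table
CREATE TABLE employees (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
DESCRIBE employees;
ALTER TABLE employees ADD COLUMNS (dept STRING);

-- DML: load data from a local file, then insert a row
LOAD DATA LOCAL INPATH '/tmp/employees.csv' INTO TABLE employees;
INSERT INTO TABLE employees VALUES (2, 'bob', 'sales');
EOF
cat /tmp/sample.hql
# On a machine with Hive installed: hive -f /tmp/sample.hql
```

Note that INSERT ... VALUES needs Hive 0.14 or later.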

Saturday 22 October 2016

Steps to install apache Pig

1. Download the Apache Pig tar file.
2. Extract the tar file:

$ tar xvzf pig-0.15.0.tar.gz

3. Set the path of Pig in the .bashrc file. Open it with:

$ sudo gedit .bashrc

4. Paste these export lines at the bottom of the .bashrc file:
export PIG_HOME=/home/ratul/pig-0.15.0
export PATH=$PATH:/home/ratul/pig-0.15.0/bin
export PIG_CLASSPATH=$HADOOP_HOME/conf
5. Run Pig from the terminal.
For local mode:

$ pig -x local

For MapReduce mode:

$ pig -x mapreduce
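After installation, a tiny Pig Latin script is a handy smoke test; the file names and relation names below are arbitrary choices for illustration:

```shell
# Create sample input and a minimal word-count script in Pig Latin.
printf 'hello world\nhello pig\n' > /tmp/words.txt
cat > /tmp/wordcount.pig <<'EOF'
-- load lines, split them into words, and count each word
lines  = LOAD '/tmp/words.txt' AS (line:chararray);
words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
grpd   = GROUP words BY word;
counts = FOREACH grpd GENERATE group, COUNT(words);
DUMP counts;
EOF
cat /tmp/wordcount.pig
# Run it in local mode with: pig -x local /tmp/wordcount.pig
```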

Thursday 20 October 2016

 Steps to install Hive Database


1. Download the tar file of Hive.

2. Extract the file:
$ tar xvzf apache-hive-1.2.1-bin.tar.gz

3. Open the .bashrc file:
  $ gedit .bashrc

Paste the lines below in the .bashrc file:

# Set HIVE_HOME
export HIVE_HOME=/home/ratul/apache-hive-1.2.1-bin
export PATH=$PATH:$HIVE_HOME/bin
4. Go to the bin folder of Hive:
$ cd /home/ratul/apache-hive-1.2.1-bin/bin
5. Edit the hive-config.sh file.

In hive-config.sh add:
export HADOOP_HOME=/home/ratul/hadoop-2.6.0

6. Before running Hive, first start the Hadoop daemons, then run:
$ hive

Wednesday 19 October 2016

Steps to Install hadoop on  ubuntu


1. Install JDK 1.6 or later.
     To install a JDK, run:
 $ sudo apt-get install openjdk-8-jdk

2. Download the required hadoop .

3. Extract by tar xvzf hadoop-2.6.0.tar.gz

4. Update JAVA_HOME inside the hadoop-env.sh file by adding:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-i386

5. Update your bashrc file by 

    $sudo gedit .bashrc

Paste these export lines at the end of the file:

export HADOOP_HOME=/home/ratul/hadoop-2.6.0
export HADOOP_CONF_DIR=/home/ratul/hadoop-2.6.0/etc/hadoop
export HADOOP_MAPRED_HOME=/home/ratul/hadoop-2.6.0
export HADOOP_COMMON_HOME=/home/ratul/hadoop-2.6.0
export HADOOP_HDFS_HOME=/home/ratul/hadoop-2.6.0
export YARN_HOME=/home/ratul/hadoop-2.6.0

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-i386
export PATH=$PATH:/home/ratul/hadoop-2.6.0/bin

export HADOOP_USER_CLASSPATH_FIRST=true

6. Modify your core-site.xml, hdfs-site.xml and mapred-site.xml.

7. Install ssh on your system using sudo apt-get install ssh.

8. ssh localhost should log you in.

9. run the below two commands to save the auth keys.
       $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
       $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

10. Your system is now set up with Hadoop installed; format your namenode with:

     $ hadoop namenode -format

11. To start the namenode, datanode, secondary namenode, resourcemanager and nodemanager:

     $ cd /home/ratul/hadoop-2.6.0/sbin
     $./start-all.sh

12. You can view the namenode web UI at http://localhost:50070

13. You can view the cluster at http://localhost:8088

14. You can interact with hdfs using hadoop fs -ls /
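Step 6 above leaves the XML edits open-ended. A minimal single-node sketch of core-site.xml and hdfs-site.xml (the port 9000 and replication factor 1 are common pseudo-distributed defaults, not values from the original post) is:

```shell
# Minimal pseudo-distributed Hadoop configs, written to /tmp for inspection.
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
cat > /tmp/hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF
cat /tmp/core-site.xml /tmp/hdfs-site.xml
# Copy the real files into $HADOOP_CONF_DIR, e.g. /home/ratul/hadoop-2.6.0/etc/hadoop
```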

Apache Hadoop

hadoop

Apache Hadoop is an open-source software framework, written in Java by Doug Cutting and Michael J. Cafarella, that supports data-intensive distributed applications and is licensed under the Apache v2 license. It supports running applications on large clusters of commodity hardware. Hadoop was derived from Google's MapReduce and Google File System (GFS) papers.

The name "Hadoop" was given by Doug Cutting, who named it after his son's toy elephant. Doug used the name for his open-source project because it was easy to pronounce and to Google. The Hadoop framework transparently provides both reliability and data motion to applications. Hadoop implements a computational paradigm named MapReduce, where the application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster. It provides a distributed file system that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both MapReduce and the distributed file system are designed so that node failures are automatically handled by the framework. It enables applications to work with thousands of computation-independent computers and petabytes of data. The entire Apache Hadoop platform is commonly considered to consist of the Hadoop kernel, MapReduce and the Hadoop Distributed File System (HDFS), plus a number of related projects including Apache Hive, Apache HBase, Apache Pig, ZooKeeper, etc.

Before proceeding with Hadoop, you should have prior exposure to Core Java, database concepts, and one of the Linux operating system flavors.

Monday 17 October 2016

BIG DATA

Big data analytics is the process of examining large data sets containing a variety of data types -- i.e., big data -- to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful business information. The analytical findings can lead to more effective marketing, new revenue opportunities, better customer service, improved operational efficiency, competitive advantages over rival organizations and other business benefits.

It is often claimed that 90% of the world's data was generated in the last few years.

What is Big Data?

Big data is a collection of large datasets that cannot be processed using traditional computing techniques. It is not merely data; it has become a complete subject, involving various tools, techniques and frameworks.


What Comes Under Big Data?

Big data involves the data produced by different devices and applications. Given below are some of the fields that come under the umbrella of Big Data.
  • Black Box Data : It is a component of helicopters, airplanes, jets, etc. It captures voices of the flight crew, recordings of microphones and earphones, and the performance information of the aircraft.
  • Social Media Data : Social media such as Facebook and Twitter hold information and the views posted by millions of people across the globe.
  • Stock Exchange Data : The stock exchange data holds information about the ‘buy’ and ‘sell’ decisions made on a share of different companies made by the customers.
  • Power Grid Data : The power grid data holds information consumed by a particular node with respect to a base station.
  • Transport Data : Transport data includes model, capacity, distance and availability of a vehicle.
  • Search Engine Data : Search engines retrieve lots of data from different databases.
Benefits of Big Data

  • Using the information kept in the social network like Facebook, the marketing agencies are learning about the response for their campaigns, promotions, and other advertising mediums.
  • Using the information in the social media like preferences and product perception of their consumers, product companies and retail organizations are planning their production.
  • Using the data regarding the previous medical history of patients, hospitals are providing better and quick service.

Big Data Challenges

The major challenges associated with big data are as follows:
  • Capturing data
  • Curation
  • Storage
  • Searching
  • Sharing
  • Transfer
  • Analysis
  • Presentation

Saturday 8 October 2016


final:

final is a keyword. A variable declared as final can be
initialized only once and cannot be changed. Java classes
declared as final cannot be extended. Methods declared as final
cannot be overridden.


finally:

finally is a block. The finally block always executes when the
try block exits. This ensures that the finally block is executed
even if an unexpected exception occurs. But finally is useful for
more than just exception handling - it allows the programmer to
avoid having cleanup code accidentally bypassed by a return,
continue, or break. Putting cleanup code in a finally block is
always a good practice, even when no exceptions are anticipated.


finalize:

finalize is a method. Before an object is garbage collected, the
runtime system calls its finalize() method. You can put code that
releases system resources in the finalize() method so it runs
before the object is garbage collected.

Thursday 1 September 2016

package wordcount;
       
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCount
{
  public static class Map extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, IntWritable>
      {

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException
        {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);

            while (tokenizer.hasMoreTokens())
            {
                value.set(tokenizer.nextToken());
                output.collect(value, new IntWritable(1));
            }
        }
    }

    public static class Reduce extends MapReduceBase implements
            Reducer<Text, IntWritable, Text, IntWritable>
      {
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException
        {
               int sum = 0;
               while (values.hasNext())
           {
               sum += values.next().get();
            }

            output.collect(key, new IntWritable(sum));
       }
    }

    public static void main(String[] args) throws Exception

    {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
          conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

       JobClient.runJob(conf); 

    }
}
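To compile and run the class above, one plausible sequence of commands is the following (the jar name, input file and HDFS paths are arbitrary; this assumes the hadoop binary is on the PATH):

```shell
# Save a build-and-run script; executing it requires a working Hadoop install.
cat > /tmp/run_wordcount.sh <<'EOF'
#!/bin/sh
# compile against the Hadoop classpath and package a jar
mkdir -p classes
javac -classpath "$(hadoop classpath)" -d classes WordCount.java
jar cf wordcount.jar -C classes .
# put input into HDFS, run the job, and print the result
hadoop fs -mkdir -p input
hadoop fs -put words.txt input/
hadoop jar wordcount.jar wordcount.WordCount input output
hadoop fs -cat output/part-00000
EOF
cat /tmp/run_wordcount.sh
```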




Tuesday 16 August 2016

                        <<<<<< COMMANDS >>>>>>




hadoop fs ls:

The hadoop ls command is used to list out the directories and files. An example is shown below:


$./hadoop fs -ls input/
Found 1 items
-rw-r--r--   1 hadoop hadoop          0 2013-09-10 09:47 /input/abc.txt

-------------------------------------
hadoop fs lsr:

The hadoop lsr command recursively displays the directories, sub directories and files in the specified directory. The usage example is shown below:

$./hadoop fs -lsr /user/hadoop/dir
Found 2 items
drwxr-xr-x   - hadoop hadoop  0 2013-09-10 09:47 /user/hadoop/dir/products
-rw-r--r--   2 hadoop hadoop    1971684 2013-09-10 09:47 /user/hadoop/dir/products/products.dat

------------------------------------
hadoop fs cat:

Hadoop cat command is used to print the contents of the file on the terminal. The usage example of hadoop cat command is shown below:
EX:
hadoop fs -cat input/abc.txt

-------------------------------
hadoop fs chmod:

The hadoop chmod command is used to change the permissions of files. The usage is shown below:
SYNTAX:
hadoop fs -chmod  <octal mode> <file or directory name>
EX:
$./hadoop fs -chmod 700 input/abc.txt

--------------------------------------------
hadoop fs chown:

The hadoop chown command is used to change the ownership of files. The usage is shown below:
SYNTAX
hadoop fs -chown <NewOwnerName> <file or directory name>
EX:
$./hadoop fs -chown hadoop input/abc.txt
---------------------------------------
hadoop fs mkdir:

The hadoop mkdir command is for creating directories in the hdfs. You can use the -p option for creating parent directories. This is similar to the unix mkdir command. The usage example is shown below:


$./hadoop fs -mkdir -p input/

The above command creates the input directory in /user/ratul directory.
-------------------------------
hadoop fs copyFromLocal:


The hadoop copyFromLocal command is used to copy a file from the local file system to the hadoop hdfs. The syntax and usage example are shown below:

Syntax:
hadoop fs -copyFromLocal <source> <destination>

Example:

Check the data in local file
> cat sales.txt
2000,iphone
2001, htc

Now copy this file to hdfs

$./hadoop fs -copyFromLocal /home/ratul/sales.txt input/

View the contents of the hdfs file.

$./hadoop fs -cat input/sales.txt
2000,iphone
2001, htc
-----------------------------------
hadoop fs copyToLocal:

The hadoop copyToLocal command is used to copy a file from the hdfs to the local file system. The syntax and usage example is shown below:
SYNTAX
hadoop fs -copyToLocal <source> <destination>
EX:
$./hadoop fs -copyToLocal input/sales.txt /home/ratul/

---------------------------
hadoop fs cp:

The hadoop cp command is for copying the source into the target. The cp command can also be used to copy multiple files into the target. In this case the target should be a directory. The syntax is shown below:
SYNTAX
>hadoop fs -cp <source> <destination>
EX:
$./hadoop fs -cp input/sales.txt new/

----------------------------
hadoop fs put:

Hadoop put command is used to copy one or more sources from the local file system to HDFS. Usage examples are shown below:

Example 1: copy a single file to HDFS

$./hadoop fs -put /home/ratul/abc.txt input/

Example 2: copy multiple files to HDFS

$./hadoop fs -put /home/ratul/abc.txt /home/ratul/qwerty.txt /new_folder

---------------------------------
hadoop fs get:

Hadoop get command copies the files from hdfs to the local file system. The syntax of the get command is shown below:
SYNTAX:
hadoop fs -get <source_from_hdfs> <destination_to_local>
EX:
$./hadoop fs -get input/abc.txt /home/ratul/
-------------------------------------
hadoop fs moveFromLocal:

The hadoop moveFromLocal command moves a file from local file system to the hdfs directory. It removes the original source file. The usage example is shown below:
SYNTAX:
hadoop fs -moveFromLocal <source_from_local> <destination_to_hdfs>
EX:
$./hadoop fs -moveFromLocal /home/ratul/abc.txt input/

-------------------------------
hadoop fs mv:

It moves the files from source hdfs to destination hdfs. Hadoop mv command can also be used to move multiple source files into the target directory. The syntax is shown below:
SYNTAX:
hadoop fs -mv <SrcFile> <destinationFile>
EX:
$./hadoop fs -mv input/abc.txt input/a/


----------------------
hadoop fs du:

The du command displays the aggregate length of files contained in the directory, or the length of a file in case it is just a file. The usage is shown below:

$./hadoop fs -du abc.txt
------------------------------
hadoop fs rm:

Removes the specified list of files and empty directories. An example is shown below:

$./hadoop fs -rm input/file.txt
--------------------------------
hadoop fs -rmr:

Recursively deletes the files and sub directories. The usage of rmr is shown below:
$./hadoop fs -rmr input/folder/
-------------------------------------
hadoop fs setrep:

Hadoop setrep is used to change the replication factor of a file.

Example:
$./hadoop fs -setrep 3 /input/abc.txt


---------------------------------
hadoop fs stat:

Hadoop stat returns the stats information on a path. The syntax of stat is shown below:
EX:
$./hadoop fs -stat /input/abc.txt
2013-09-24 07:53:04
----------------------------
hadoop fs tail:

Hadoop tail command prints the last kilobyte of the file to stdout.
$./hadoop fs -tail /user/hadoop/abc.txt

12345 abc
2456 xyz
---------------------
hadoop fs text:

The hadoop text command displays the source file in text format. The syntax is shown below:
SYNTAX:
hadoop fs -text <src>
EX:
$./hadoop fs -text input/abc.txt

----------------------------------------
hadoop fs touchz:

The hadoop touchz command creates a zero byte file. This is similar to the touch command in unix. An example is shown below:
EX:
$./hadoop fs -touchz /input/aaa.txt

Wednesday 13 July 2016


class Person < ApplicationRecord
  validates :name, presence: true
end

 OR

class Person < ApplicationRecord
  validates :name, :login, :email, presence: true
end

class Person < ApplicationRecord
  validates :terms_of_service, acceptance: true
end

class Person < ApplicationRecord
  validates :email, confirmation: true
end

class Product < ApplicationRecord
  validates :legacy_code, format: { with: /\A[a-zA-Z]+\z/,
    message: "only allows letters" }
  # or use /\A[a-zA-Z0-9]+\z/ to also allow digits
end

class Person < ApplicationRecord
  validates :name, length: { minimum: 2 }
  validates :bio, length: { maximum: 500 }
  validates :password, length: { in: 6..20 }
  validates :registration_number, length: { is: 6 }
end

class Player < ApplicationRecord
  validates :points, numericality: true
end

class Account < ApplicationRecord
  validates :email, uniqueness: true
end

Monday 4 July 2016

TAGS:

ERB tags                   <%    %>
print ERB tags             <%=  %>
ERB comment                <%#  %>
if block                   <% if %>...<% end %>
if / else block            <% if %>...<% else %>...<% end %>
else tag                   <% else %>
elsif tag                  <% elsif %>
end block                  <% end %>
link_to helper             <%= link_to ..., ... %>
form_for helper            <%= form_for(@object) do |f| %>

Helpers:

   Form Component     Output Code Snippet

   f.submit                       <%= f.submit "Submit"  %>
   f.password_field          <%= f.password_field :attribute %>
   f.text_area                   <%= f.text_area :attribute %>
   f.check_box                 <%= f.check_box :attribute %>
   f.label                          <%= f.label :attribute, "Attribute" %>
   f.text_field                   <%= f.text_field :attribute %>
   f.file_field                    <%= f.file_field :attribute %>
   f.hidden_field              <%= f.hidden_field :attribute %>

Monday 27 June 2016

<html>
       
<body>
<!-- heading -->
<h1> GEEK</h1>
<h1>Geek Languages</h1>
<hr />
<h2>Geek Languages</h2>
<hr />
<h3>Geek Languages</h3>
<h4>Geek Languages</h4>
<h5>This is a Heading 5</h5>
<h6>Geek Languages</h6>
<!-- paragraph -->
<p>Geek Languages</p>

<p>
My Bonnie lies over the ocean.
My Bonnie lies over the sea.
My Bonnie lies over the ocean.
Oh, bring back my Bonnie to me.
</p>

<!-- tags -->
<p><b>This text is bold</b></p>
<p><strong>This text is strong</strong></p>
<p><big>This text is big</big></p>
<p><em>This text is emphasized</em></p> <br />
<p><i>This text is italic</i></p><br />
<p><small>This text is small</small></p>
<p>This is<sub> subscript</sub> and <sup>superscript</sup>
</p>
<p>
a dozen is
<del>twenty</del>
<ins>twelve</ins>
pieces
</p>

<!-- preformatted text -->
<pre>
This is
preformatted text.
It preserves
both spaces
and line breaks and shows the text in a monospace font.
</pre>

<!-- anchor tag -->
<a href="http://www.geeklanguages.blogspot.com">This is a link to my Web site.</a>

<!-- IMAGE TAG-->
<img src="aa.png" width="104" height="142" /> <br/>
<!-- ABBREVIATION -->
<abbr title="United Nations">UN</abbr>
<br />
<acronym title="World Wide Web">WWW</acronym>


<!-- table -->
<h4>A background image:</h4>
<table border="3" background="aa.png">
<tr>
<td bgcolor="blue">First</td>
<td>Row</td>
</tr>
<tr>
<td>Second</td>
<td>Row</td>
</tr>
</table>

<!-- table with spans -->

<h4>Cell that spans two columns:</h4>
<table border="1">
<tr>
<th>Name</th>
<th colspan="2">Telephone</th>
</tr>
<tr>
<td>Bill Gates</td>
<td>555 77 854</td>
<td>555 77 855</td>
</tr>
</table>

<!-- table with cell padding -->
<h4>With cellpadding:</h4>
<table border="1" cellpadding="10">
<tr>
<td>First</td>
<td>Row</td>
</tr>
<tr>
<td>Second</td>
<td>Row</td>
</tr>
</table>

<div>
<table border="2">
<tr>
<th align="left">Money spent on....</th>
<th align="right">January</th>
<th align="right">February</th>
</tr>
</table>
</div>
<!-- text color -->
<p style="color:grey">
Color set by using a color name
</p>



<!--Ordered (Numbered) List-->
<ol>
<li>First item</li>
<li>Next item</li>
</ol>

<!-- Definition List -->
<dl>
<dt>First term</dt>
<dd>Definition</dd>
<dt>Next term</dt>
<dd>Definition</dd>
</dl>


<!-- Unordered list -->
<h4>A nested List:</h4>
<ul>
<li>Coffee</li>
  <li>Tea
   <ul>
<li>Black tea</li>
<li>Green tea
      <ul>
<li>China</li>
<li>Africa</li>
      </ul>
   </li>
</ul>
</li>
<li>Milk</li>
</ul>
</body>


</html>

<!-- USE OF FRAMES -->
<html>
<frameset rows="50%,50%">
<frame src="frame_a.htm">
<frameset cols="25%,75%">
<frame src="frame_b.htm">
<frame src="frame_c.htm">
</frameset>
</frameset>
</html>
Ruby was created by Yukihiro Matsumoto, or "Matz", in Japan in the mid-1990s. It was designed for programmer productivity with the idea that programming should be fun for programmers. It emphasizes the necessity for software to be understood by humans first and computers second.
Ruby continues to gain popularity for its use in web application development. The Ruby on Rails framework, built with the Ruby language by David Heinemeier Hansson, introduced many people to the joys of programming in Ruby. Ruby has a vibrant community that is supportive of beginners and enthusiastic about producing high-quality code.

Ruby is "A Programmer's Best Friend".


Features of Ruby

  • Ruby is an open-source and is freely available on the Web.
  • Ruby is a general-purpose, interpreted programming language.
  • Ruby is a true object-oriented programming language.
  • Ruby is a server-side scripting language similar to Python and PERL.
  • Ruby can be embedded into Hypertext Markup Language (HTML).
  • Ruby has a clean and easy syntax that allows a new developer to learn Ruby very quickly and easily.
  • Ruby has similar syntax to that of many programming languages such as C++ and Perl.
  • Ruby is highly scalable and big programs written in Ruby are easily maintainable.
  • Ruby can be used for developing Internet applications.
  • Ruby can be installed on Linux and Windows.
  • Ruby can easily be connected to SQLite, MySQL, Oracle.
  • Ruby has a rich set of built-in functions, which can be used directly in Ruby scripts.


#class with rescue and raise in ruby program

class Question
  attr_accessor :a1, :a2

  def initialize
    @a1 = rand(50)
    @a2 = rand(50)
  end

  def make_random_subtraction
    begin
      puts 'Random subtraction of two numbers.'

      puts "What is '#@a1' - '#@a2' ?"

      @a3 = @a1 - @a2
      puts "#@a3"

      if @a3 < 0
        raise 'exception'
      end
    rescue
      puts "An exception : the answer is a negative value."
      puts "Let's retry it in the other order."
      @a3 = @a2 - @a1

      puts "Here, the answer is : #@a3"
    end
  end
end

obj = Question.new
obj.make_random_subtraction


OUTPUT:

Random subtraction of two numbers.
What is '18' - '1' ?
17

Friday 24 June 2016

Installation of ruby

1. Extract ruby-2.3.1.tar.gz in root
2. Go to the ruby-2.3.1 folder in root
3. Then run these commands
>> ./configure
>> make
>> make install

Installation of rvm

1. Extract rvm-1.27.0.tar.gz
2. To set the path, run the following command
 >> sudo gedit .bashrc
Paste this line in your .bashrc file
 export PATH=$PATH:/home/ratul/ruby/rvm-1.27.0/bin
3. rvm automount
 It will ask you to give a name; mention any name
4. rvm use --default
5. rvm list

Installation of Rubygem

1. Unzip your rubygem-2.6.4.zip
2. Move into root by
>> sudo su
3. Go to the rubygem directory
4. Run the command
>> ls
5. You can see the file setup.rb
6. Now run
>> ruby setup.rb


# use of class in ruby program

class Abc
  attr_accessor :a1, :ans

  def initialize(a1, ans)
    @a1 = a1
    @ans = ans
  end

  def ask
    puts "Question #{a1}"
    a = gets.chomp

    if a == "#@ans"
      puts "correct"
    else
      puts "wrong, answer was #@ans"
    end
  end
end

a = Abc.new("what is 3 + 5", "8")
a.ask

OUTPUT:

Question what is 3 + 5
4
wrong, answer was 8

#display An Array

the_count = [1, 2, 3, 4, 5]
fruits = ['apples', 'oranges', 'pears', 'apricots']
change = [1, 'pennies', 2, 'dimes', 3, 'quarters']



# with a for loop
for number in the_count
  puts "This is count #{number}"
end
#OR

the_count.each do |number|
puts "no. are   #{number}"
end
#OR
fruits.each do |fruit|
  puts "A fruit of type: #{fruit}"
end
#OR
change.each {|i| puts "I got #{i}" }

# OR
fruits.each {|i| puts " list of ::#{i}" }

OUTPUT:

This is count 1
This is count 2
This is count 3
This is count 4
This is count 5
no. are   1
no. are   2
no. are   3
no. are   4
no. are   5
A fruit of type: apples
A fruit of type: oranges
A fruit of type: pears
A fruit of type: apricots
I got 1
I got pennies
I got 2
I got dimes
I got 3
I got quarters
 list of ::apples
 list of ::oranges
 list of ::pears
 list of ::apricots

# random index value of array in ruby program

def que_ans

q=['What is 8+6?','What is 5*4?','What is 7-4?','What is 3*5?','What is 9/3?']
a=['14','20','3','15','3']

q_len = q.length
a_len = a.length


@que = rand(q_len)

puts "Que:" + q[@que]
c = gets.chomp
if c == a[@que] then
puts "Correct."
else
puts "Wrong! Answer is:" + a[@que]
end

end

puts "Randomly questions from Array:"
for i in  0..4
que_ans
end

OUTPUT:

Randomly questions from Array:
Que:What is 3*5?
3
Wrong! Answer is:15
Que:What is 3*5?
15
Correct.
Que:What is 7-4?
3
Correct.
Que:What is 5*4?
2
Wrong! Answer is:20
Que:What is 8+6?
14
Correct.
#methods with arguments

def ask(question, answer)
  puts question
  ans = gets.chomp.to_i
  if ans == answer
    @true += 1
    puts "correct"
  else
    puts "wrong answer"
  end
end

question = ['what is 8+6', '5*4?', '7-4?', '3*5', '9/3']
answer = [14, 20, 3, 15, 3]
@true = 0
for i in 0..4
  ask(question[i], answer[i])
end
puts "true answer : #@true"


OUTPUT:

what is 8+6
3
wrong answer
5*4?
20
correct
7-4?
3
correct
3*5
15
correct
9/3
3
correct
true answer : 4



#random addition

def random_addition
  @arg1 = rand(-2..0)
  @arg2 = rand(98..100)

  puts "Random addition of two numbers :"
  puts "What is #@arg1 + #@arg2 ?"

  @arg3 = @arg1 + @arg2
  puts "#@arg3"
end

random_addition

OUTPUT:

Random addition of two numbers :
What is 0 + 100 ?
100


program for ruby:

#method#

def addition
  @arg1 = gets.chomp
  @arg2 = gets.chomp

  puts "addition of two numbers :"
  puts "What is #@arg1 + #@arg2 ?"

  @arg3 = @arg1.to_i + @arg2.to_i
  puts "#@arg3"
end

puts "enter 2 nos"
addition

OUTPUT
enter 2 nos
23
3
addition of two numbers :
What is 23 + 3 ?
26

Monday 13 June 2016


1. ctrl+alt+t = To open a terminal.
2. ctrl+shift++ = To enlarge the terminal text.
3. pwd = This command is used to know the present working directory.
4. cd = To change the working directory, as in "cd Desktop".
5. ls = To show the contents of any directory as a list.
6. clear = To clear the terminal.
7. gcc "filename" = To compile a file.
8. sudo = Means "superuser do"; if any command is not working, use sudo to make it work.
9. su = Means "switch user", to switch the user.
10. cal = To see the calendar.
11. date = To see the date.
12. ls -a = To show hidden files.
13. man "command" = To get help for any command.
14. who = To see which users have opened terminals.
15. whoami = To know which user is working.


TO CREATE OR DELETE ANY DIRECTORY OR FILE.

1. mkdir "dirname" = To create a directory.
2. rmdir "dirname" = To remove an existing empty directory.
3. rm -rf "filename" = To remove a particular file (or a folder containing many files) from a folder.
4. touch "file name" = To create files with different extensions in a single command.
5. gedit "file name" = The file opens from the terminal, and no other command will work in that terminal until it is closed.
6. pkill "process name" = To close a process running in the background.
7. kill "process id" = To kill a process if the process id is known.
8. ls "file name" = To check whether the file is present or not.

9. cat "filename" = This command is used to show the data of a file on the terminal.
10. cat > "filename" = To enter data in an existing file, but the previous data will be deleted.
11. cat >> "filename" = To enter data in an existing file in append mode.
12. sudo apt-get install vlc = To install a package (here vlc) from the repositories.
13. sudo apt-get update = To check for updates from the net.

14. wc "filename" = To see how many lines, words and characters are in a file.
15. cp "source" "des" = To copy a file from one place to another.
16. mv "existing" "new" = To move or rename the file "existing" to "new".
17. ls -l = To see the permissions of all files.
18. head "filename" = To read the first 10 lines of a file.
19. tail "filename" = To read the last 10 lines of a file.
20. more "filename" = To read a file page by page.
21. less "filename" = To read a file.
22. compress a file = tar cvzf "file.tar.gz" "file"
23. extract a file = tar xvzf file.tar.gz
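The file commands above can be exercised together in one short session; the directory and file names here (demo_dir, a.txt, ...) are arbitrary:

```shell
# A small session exercising mkdir, touch, cat, wc, cp, mv and tar.
cd /tmp
mkdir -p demo_dir
cd demo_dir
touch a.txt                        # create an empty file
echo "first line"  >  a.txt        # write data (overwrites)
echo "second line" >> a.txt        # append data
cat a.txt                          # show the contents
wc  a.txt                          # lines, words, characters
cp a.txt b.txt                     # copy a file
mv b.txt c.txt                     # rename (move) a file
tar cvzf demo.tar.gz a.txt c.txt   # compress into an archive
tar tzf  demo.tar.gz               # list the archive contents
```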