Tuesday 8 December 2020



Microsoft Surface Laptop






Microsoft’s new Surface Laptop is a first for the Windows maker: a conventional laptop that runs a stripped-down but more battery-efficient operating system called Windows 10 S. The company is aiming the $999 Surface Laptop at the educational set, but it’s bound to go toe-to-toe with Apple’s entry-level MacBook Air as well. Other hardware makers, meanwhile, will offer Windows 10 S on far cheaper devices that will compete with Google’s Chromebooks. Together, the Surface Laptop and the new version of Windows show Microsoft is willing to mix things up in the notebook world—to consumers’ benefit.

Monday 9 November 2020


The Latest Trends in Computer Science



1 - Quantum computing: Moore’s law is failing[1]. Both physically and economically, we are at the end of our silicon rope; we are almost at the physical limit of how many transistors we can cram into a chip. This is why, over the last few years, progress has been all about adding cores and better load distribution rather than new architectures. You can only do so much with silicon and classical computing. To preserve the long-term exponential growth of computing power per dollar, we need a paradigm shift, and that paradigm shift is coming from quantum computing. However, using qubits instead of bits has its own limits: quantum computers are very good at some things and very inept at others. Over the medium term (10–15 years), most advances will come from designing hybrid systems that combine quantum and classical components. This will require computer science jobs and topics that do not yet exist. If I were 20, I would definitely devote some serious effort to following quantum computing innovations.


2 - Deep Learning and Natural Language Processing: These two are in fact indispensable for building a system that can apply a general learning ability (we are not there yet) to existing, non-structured sources of information, i.e. sources not specifically designed to be input into the system (we are halfway there). If we are to move towards general artificial superintelligence, we will have to build a system that can do unsupervised learning across multiple domains without explicit intervention from programmers. Much effort will be spent developing learning algorithms that are designed not for specific problems but for general information processing.

3 - Data visualization: This will sound strange to most, but in my opinion existing data visualization practices are well behind our ability to make inferences from data. Many good routines exist, but many are still very labor intensive. We have yet to develop an intelligent interface between analytical inference and the visual representation of that inference. I am not an expert on this, but in the next 10 years we might witness dramatic changes in how we translate data analysis results for end users.

4 - Autonomous systems: Self-driving cars, homeostatic control systems for your house, health monitoring implants, robotic asteroid miners, self-replicating robots for space exploration, and the like. The Internet of Things is here, but it has still not exploded; a lot of room for rapid development is on the horizon. People who can design efficient and seamless communication and coordination between the “things” in the Internet of Things will be in high demand.

5 - Neural interfaces between digital and biological systems: As Elon Musk says, the main bottleneck in brain-computer communication is human output. Our input system (the visual cortex) has incredibly high bandwidth; we can absorb immense amounts of information through our visual (and auditory) systems. The problem is output (or input to the digital system). We have a very precise but very inefficient meat-stick method (i.e. typing commands) and a much faster but harder-to-process vocal channel (speech recognition). Speech recognition has reached impressive levels, but we still fall far short of matching our input bandwidth with our output bandwidth. To match our visual information processing capacity we need direct neural interfaces with computers: systems that can interpret neural signals as information output. Some rudimentary successes have been reported over the last few years, but real development is still ahead.

We can probably think of many more future topics in computer science. If we go into details, I think we will see some interdisciplinary cross-breeding between medicine and computer science (much like what happened with genomics and bioinformatics), and probably many more developments that few people can predict.

                                                                                                      Posted By
                                                                                                      P.Revathi,M.Sc.,M.Phil.,
                                                                                                      Department of Computer Science
                                                                                                      MKJC, Vaniyambadi.


 


Thursday 15 October 2020

What Is Data Mining? 

Data mining refers to extracting or mining knowledge from large amounts of data. The term is actually a misnomer: it would more appropriately have been named knowledge mining, which emphasizes that knowledge is what is mined from large amounts of data. Data mining is the computational process of discovering patterns in large data sets using methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use. The key properties of data mining are:
  • Automatic discovery of patterns
  • Prediction of likely outcomes
  • Creation of actionable information
  • Focus on large datasets and databases
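As a toy illustration of automatic pattern discovery, the sketch below counts which pairs of items appear together in a small set of transactions, which is the core idea behind market-basket (association) mining. The sample transactions, the support threshold, and the class name are invented for this example and are not drawn from the post above.

```java
import java.util.*;

// Toy pattern discovery: count item pairs that co-occur across transactions.
// The sample transactions and the support threshold are invented examples.
public class FrequentPairs {
    public static void main(String[] args) {
        List<List<String>> transactions = Arrays.asList(
            Arrays.asList("bread", "milk", "eggs"),
            Arrays.asList("bread", "milk"),
            Arrays.asList("milk", "eggs"),
            Arrays.asList("bread", "milk", "butter")
        );
        int minSupport = 2; // keep only pairs seen in at least 2 transactions

        Map<String, Integer> pairCounts = new HashMap<>();
        for (List<String> t : transactions) {
            List<String> items = new ArrayList<>(new TreeSet<>(t)); // sort and dedupe
            for (int i = 0; i < items.size(); i++) {
                for (int j = i + 1; j < items.size(); j++) {
                    String pair = items.get(i) + " + " + items.get(j);
                    pairCounts.merge(pair, 1, Integer::sum);
                }
            }
        }

        pairCounts.forEach((pair, count) -> {
            if (count >= minSupport) {
                System.out.println(pair + " occurs together in " + count + " transactions");
            }
        });
    }
}
```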

Thursday 10 September 2020

 

The distributed cloud

Distributed cloud refers to the distribution of public cloud services to locations outside the cloud provider’s physical data centers, but which are still controlled by the provider. In the distributed cloud model, the cloud provider is responsible for all aspects of cloud service architecture, delivery, operations, governance and updates. The evolution from centralized public cloud to distributed public cloud ushers in a new era of cloud computing.

Distributed cloud allows data centers to be located anywhere. This solves technical issues like latency as well as regulatory challenges like data sovereignty. It also offers the benefits of a public cloud service alongside the benefits of a private, local cloud.

Monday 3 August 2020

 

Human augmentation

Human augmentation is the use of technology to enhance a person’s cognitive and physical experiences.

Physical augmentation changes an inherent physical capability by implanting or hosting a technology within or on the body. For example, the automotive or mining industries use wearables to improve worker safety. In other industries, such as retail and travel, wearables are used to increase worker productivity. 

Physical augmentation falls into four main categories: Sensory augmentation (hearing, vision, perception), appendage and biological function augmentation (exoskeletons, prosthetics), brain augmentation (implants to treat seizures) and genetic augmentation (somatic gene and cell therapy). 

AI and ML are increasingly used to make decisions in place of humans.

Thursday 9 July 2020

                                             Hyperautomation

Automation uses technology to automate tasks that once required humans.

Hyperautomation deals with the application of advanced technologies, including artificial intelligence (AI) and machine learning (ML), to increasingly automate processes and augment humans. Hyperautomation extends across a range of tools that can be automated, but also refers to the sophistication of the automation (i.e., discover, analyze, design, automate, measure, monitor, reassess).

"Hyperautomation often results in the creation of a digital twin of the organization"

 

Wednesday 24 June 2020

Java 8

 

Java 8 

Java isn't a new language. It's often everyone's first language, thanks to its role as the lingua franca for AP Computer Science. There are billions of JAR files floating around running the world.

But Java 8 is a bit different. It comes with new features aimed at offering functional techniques that can unlock the parallelism in your code. You don't have to use them; you could stick with the old Java because it still works. But if you don't, you'll miss the chance to give the Java virtual machine (JVM) even more structure for optimizing execution, and you'll miss the chance to think functionally and write cleaner, faster, and less buggy code.

Highlights: Lambda expressions and concurrent code
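To make the "lambda expressions and concurrent code" highlight concrete, here is a minimal sketch, assuming a simple sum-of-squares task: the same computation written first as a classic loop and then as a Java 8 parallel stream driven by a lambda. The class name and the chosen range are illustrative only.

```java
import java.util.stream.LongStream;

// Minimal Java 8 sketch: summing squares with a lambda and a parallel stream.
// The range and the sum-of-squares task are illustrative choices only.
public class Java8Demo {
    public static void main(String[] args) {
        // Old style: an explicit, sequential loop.
        long loopSum = 0;
        for (long i = 1; i <= 1_000_000; i++) {
            loopSum += i * i;
        }

        // Java 8 style: a lambda passed to a stream; parallel() lets the
        // JVM split the work across cores without changing the logic.
        long streamSum = LongStream.rangeClosed(1, 1_000_000)
                                   .parallel()
                                   .map(i -> i * i)
                                   .sum();

        System.out.println(loopSum == streamSum); // prints true
    }
}
```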

Wednesday 3 June 2020

Blog Creation FDP


BLOG CREATION



Dear all,

       Greetings to one and all present here! I am extremely privileged and happy to welcome you all to this Faculty Development Program on “Blog Creation”, held on 3rd June 2020 and organized by the Blog Committee, Marudhar Kesari Jain College for Women, Vaniyambadi, Thirupattur - 635751.






Kindly attend the Online Quiz on Blog Creation.




Friday 29 May 2020

Java 8

 

Java 8

Java isn't a new language. It's often everyone's first language, thanks to its role as the lingua franca for AP Computer Science. There are billions of JAR files floating around running the world.

But Java 8 is a bit different. It comes with new features aimed at offering functional techniques that can unlock the parallelism in your code. You don't have to use them; you could stick with the old Java because it still works. But if you don't, you'll miss the chance to give the Java virtual machine (JVM) even more structure for optimizing execution, and you'll miss the chance to think functionally and write cleaner, faster, and less buggy code.

Highlights: Lambda expressions and concurrent code

Thursday 28 May 2020

Blog Creation



PROGRAMMING LANGUAGES IN 2020





Crystal
With a syntax resembling that of Ruby, Crystal is extremely easy to read and write. Its statically type-checked design helps in detecting type errors as early as possible rather than failing at runtime.
Dart by Google
Made by Google, Dart looks impressive in the race among emerging programming languages. Optimized for UI development, the language offers mature and complete async-await support for user interfaces built around event-driven code.

Elixir
Elixir runs on the Erlang virtual machine and was created for building maintainable applications. With its minimalistic coding style, Elixir lets users write code in a short, concise way. To increase productivity, Elixir allows the language to be extended into different domains. Furthermore, Elixir is also used for web development.

Elm
Elm compiles quickly and generates fast JavaScript code. Elm comes with type inference, which helps in detecting API changes automatically. Other notable features of Elm include a helpful compiler and no runtime exceptions, and it is considered a delightful language for data visualization.
Kotlin
Kotlin earns a brownie point by being a programming language for Android, cross-platform and web development. The language is efficient at reducing boilerplate code and plays it safe by preventing null pointer exceptions.

Rust
Sponsored by Mozilla Research, Rust has a design similar to that of C++. However, the language was built to guarantee memory safety while keeping top-notch performance.
Haskell
Named after the logician Haskell Curry, the programming language is statically typed and does not require writing out every type in a program; most types are inferred bidirectionally by the compiler. The flagship compiler, GHC, comes with a parallel garbage collector and several concurrency primitives. Haskell programs are designed to compose, and new control constructs can be written as ordinary functions.
Clojure
The programming language uses runtime polymorphism that is easy to extend. Clojure emphasizes recursive iteration instead of side-effect-based looping and is a hosted language that shares the JVM's type system and garbage collector (GC).

Julia
Julia provides good support for interactive use because it is dynamically typed, and it has a rich language of descriptive data types. With its high-level syntax, Julia can be used by anyone from any background or experience level. Furthermore, it comes with several other features such as asynchronous I/O, debugging tools and a package manager.
Scala
A unique feature of Scala is the ability to mix multiple traits into a class to combine their interfaces and behaviour. Scala also offers a concise syntax with anonymous functions.

Tuesday 12 May 2020

Java 8

 

Java 8

Java isn't a new language. It's often everyone's first language, thanks to its role as the lingua franca for AP Computer Science. There are billions of JAR files floating around running the world.

But Java 8 is a bit different. It comes with new features aimed at offering functional techniques that can unlock the parallelism in your code. You don't have to use them; you could stick with the old Java because it still works. But if you don't, you'll miss the chance to give the Java virtual machine (JVM) even more structure for optimizing execution, and you'll miss the chance to think functionally and write cleaner, faster, and less buggy code.

Highlights: Lambda expressions and concurrent code

Wednesday 15 April 2020



MACHINE LEARNING WITH AI

  • Machine learning automates analytical model building. It uses methods from neural networks, statistics, operations research and physics to find hidden insights in data without being explicitly programmed where to look or what to conclude.
  • A neural network is a kind of machine learning inspired by the workings of the human brain. It’s a computing system made up of interconnected units (like neurons) that process information by responding to external inputs, relaying information between the units. The process requires multiple passes over the data to find connections and derive meaning from undefined data (a minimal code sketch of a single such unit appears after this list).
  • Deep learning uses huge neural networks with many layers of processing units, taking advantage of advances in computing power and improved training techniques to learn complex patterns in large amounts of data. Common applications include image and speech recognition.
  • Computer vision relies on pattern recognition and deep learning to recognize what’s in a picture or video. When machines can process, analyze and understand images, they can capture images or videos in real time and interpret their surroundings.
  • Natural language processing is the ability of computers to analyze, understand and generate human language, including speech. The next stage of NLP is natural language interaction, which allows humans to communicate with computers using normal, everyday language to perform tasks.
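To make the neural-network bullet above concrete, here is a minimal sketch of a single artificial neuron (a perceptron) in Java. The AND-gate training data, the learning rate, and the class name are illustrative assumptions, not something described in the post; real neural networks stack many such units into layers.

```java
import java.util.Arrays;

// Minimal sketch of a single artificial neuron (perceptron).
// The AND-gate training data and the learning rate are illustrative assumptions.
public class Perceptron {
    private final double[] weights;
    private double bias = 0.0;
    private final double learningRate = 1.0;

    Perceptron(int inputs) {
        weights = new double[inputs];
    }

    // Weighted sum of the inputs followed by a step activation.
    int predict(double[] x) {
        double sum = bias;
        for (int i = 0; i < weights.length; i++) {
            sum += weights[i] * x[i];
        }
        return sum > 0 ? 1 : 0;
    }

    // Respond to an external input: adjust weights in proportion to the error.
    void train(double[] x, int target) {
        int error = target - predict(x);
        for (int i = 0; i < weights.length; i++) {
            weights[i] += learningRate * error * x[i];
        }
        bias += learningRate * error;
    }

    public static void main(String[] args) {
        double[][] data = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        int[] labels = {0, 0, 0, 1}; // logical AND of the two inputs
        Perceptron p = new Perceptron(2);
        for (int epoch = 0; epoch < 20; epoch++) {   // multiple passes over the data
            for (int i = 0; i < data.length; i++) {
                p.train(data[i], labels[i]);
            }
        }
        for (double[] x : data) {
            System.out.println(Arrays.toString(x) + " -> " + p.predict(x));
        }
    }
}
```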

Wednesday 8 April 2020

Java 8

 

Java 8

Java isn't a new language. It's often everyone's first language, thanks to its role as the lingua franca for AP Computer Science. There are billions of JAR files floating around running the world.

But Java 8 is a bit different. It comes with new features aimed at offering functional techniques that can unlock the parallelism in your code. You don't have to use them; you could stick with the old Java because it still works. But if you don't, you'll miss the chance to give the Java virtual machine (JVM) even more structure for optimizing execution, and you'll miss the chance to think functionally and write cleaner, faster, and less buggy code.

Highlights: Lambda expressions and concurrent code

Tuesday 3 March 2020




Artificial intelligence (AI)
Artificial intelligence (AI) has been a long-discussed topic ever since programmable computers were developed. Academics and philosophers have questioned the differences between man and machine. Could we program the human brain, with all of its intricacies, into a computer? Would a computer then be able to think?
To date, we have yet to answer these interesting, mind-bending questions, but we have come closer to making computers smarter. Though, some may argue, even the smartest computers still have less intelligence than a cockroach. Think about that for a little bit.
The smartest computers still can’t do a bunch of tasks at once. Instead, they are very good at doing the one task they are programmed to do.
Before we dig any further, let’s define some key terms. For each, we chose one of the many definitions available online.

The first three are hierarchical; AI is the largest, overarching category. Machine Learning (ML) is a subset of AI and Deep Learning (DL) a subset of ML.
Artificial intelligence — A computer system able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
Machine learning — Arthur Samuel said “Machine Learning is the ability to learn without being explicitly programmed.”
Deep learning — From MIT News: Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected, similar to the synaptic connections between axons and dendrites.
Image recognition — Using machine and deep learning techniques to identify contents within an image.
Architecture — The scaffolding and blueprints of the algorithm model used to predict an outcome.
These words will come in handy for the next couple of articles as well! Do some google searches on them and keep an eye out for them when we introduce common machine learning algorithms in the next article.

Why now?
As stated earlier, machine learning and artificial intelligence concepts are not new. In fact, they are decades old. However, several factors have recently changed, and they have significantly contributed to advances in these fields. These are important to remember because this marks a unique point in history.
First, the computational speed of technology is rapidly advancing. Hardware known as GPUs (graphics processing units) has allowed computations to be parallelized: more calculations can be done at the same time instead of one after the other. This allows for huge gains in efficiency. We can thank the video gaming industry for these advances.
Second, there have been significant advances in algorithms. Deep learning frameworks and architectures have improved thanks to the likes of Google, Facebook, the research community, and individuals emerging in the open-source communities. For example, a class of algorithms widely used today is neural networks. These algorithms are loosely modeled after the brain, where information is passed between different layers of neurons through the network. Over time, the algorithms have become more complex, ranging from a few layers to tens or potentially hundreds of layers. This added complexity allows for interesting interactions between variables that we may or may not have thought of as important.
Last, but not least, is the exponential increase in data available within industries, the web, and businesses. This area has been developing for years and will continue to develop. Whether it is the influx of social data, the number of images on the internet, or your purchases on Amazon, data is ever-present and will continue to serve as the starting point for many of these machine learning algorithms.

All of this may sound overwhelming, and the scope is certainly enormous. Each of the factors contributing to the emergence of machine learning and artificial intelligence could certainly be teased out in further detail. What matters, however, is understanding that these pieces were the ingredients for the adoption of algorithms that look through data to find stories.
The story we are talking about through this series is the story of how machine learning can change the way radiologists do their work. However, this change will take time, understanding and communication.

With all of this jargon, it is easy to get discouraged. Don’t worry. We are all on this journey to learn more about how technology and the current state of radiology (and other fields) will change.
We recommend reading through these definitions several times, maybe doing some outside research, and definitely sticking with us as we dig deeper into these concepts. In summary: deep learning is a subset of ML, and ML is a subset of AI.

Monday 24 February 2020






The Department of Computer Science and Computer Applications conducted a Power Seminar on Artificial Intelligence and Machine Learning on February 22nd at the MKJC Campus.


Within the span of 10 short years, or perhaps even less, service apps like Uber, Lyft, DoorDash, AirBnB and others have spawned millions of users, and can be found on almost everyone’s smart phone. Personal assistants like Siri and Alexa have entered many of our lives. It would be terribly naive for anyone to say that the world hasn’t changed in the last 10 years. This technology growth and change is likely to continue for the next decade and beyond.

Thursday 23 January 2020


Programming Languages
1.     JavaScript
It is almost impossible to be a software engineer these days without using JavaScript in some way. According to Stack Overflow's 2019 Developer Survey, JavaScript is the most popular language among developers for the seventh year in a row; about 70 percent of survey respondents reported that they had used JavaScript within the past year. Although JavaScript is primarily a front-end language run in the browser, it can also be used on the server side through Node.js to build scalable network applications. Node.js is compatible with Linux, SunOS, Mac OS X and Windows.
2.     Swift
First announced by Apple in 2014, Swift is a relatively new programming language used to develop iOS and macOS applications.
Swift has been optimized for performance and built from the ground up to match the realities of modern iOS development. Not only does iOS run on every iPhone and iPad, but it’s also the basis for other operating systems such as watchOS (for Apple Watches) and tvOS (for Apple TVs). In addition, Apple isn't going anywhere as a tech industry leader, and iOS apps continue to be the most profitable in the mobile app marketplace.