Tuesday, 20 July 2021

PG and Research Department of Computer Science and Computer Applications organizes a Virtual Power Seminar on “Artificial Intelligence & Machine Learning”

 



Friday, 11 June 2021

PG & Research Department of Computer Science and PG Department of Computer Applications are jointly organizing a One Day National Level Webinar on “AWS CLOUD” on 14.06.2021.




Friday, 30 April 2021

 

IT In Space

If space has always been an enigma for mankind, then the moon has always served as the first outpost for any attempt at understanding or exploring deeper space.

All ventures into outer space, ranging from exploratory fly-bys to manned flights, have first been tried out on the moon. As in most other areas, space research is also moving into larger-scale simulations using powerful computers. In fact, given the high cost, and often the impracticability, of conducting live experiments, space research had moved into computer-based simulation long before most other streams.

Everything from the flight paths of future rockets to theories on the origin of the universe and its evolution is today computer-simulated. Consider the case of the magnetic field around a planet. As with everything else in space research, let us take the moon as our example. The moon’s magnetic field is very feeble compared to that of the earth. Also, unlike on the earth, it varies widely from point to point. This much is known from the measurements taken by spacecraft that flew by or landed on the moon.
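To give a rough, hypothetical flavour of what such a simulation computes, the short Python sketch below samples the strength of an idealised dipole field at a few distances from a body’s centre. The constants and the simple dipole model are illustrative only; the moon’s patchy, feeble field would in practice need far more elaborate numerical models.

```python
# Minimal sketch: sampling an idealised dipole magnetic field at several
# radial distances. The dipole moment is an illustrative placeholder,
# not a measured lunar or terrestrial value.
import math

MU0 = 4 * math.pi * 1e-7      # permeability of free space (T*m/A)
DIPOLE_MOMENT = 1.0e15        # hypothetical dipole moment (A*m^2)

def dipole_field_strength(r_m, latitude_deg):
    """Magnitude of an ideal dipole field at distance r_m (metres)
    and magnetic latitude latitude_deg (degrees)."""
    lat = math.radians(latitude_deg)
    return (MU0 * DIPOLE_MOMENT / (4 * math.pi * r_m**3)) * math.sqrt(
        1 + 3 * math.sin(lat) ** 2
    )

if __name__ == "__main__":
    for r_km in (2000, 4000, 8000):
        b = dipole_field_strength(r_km * 1e3, 30.0)
        print(f"r = {r_km:5d} km  ->  |B| = {b:.3e} T")
```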

THE INTERNET ON MARS

The internet is slated to go beyond this world, the first target being Mars, to be followed by Jupiter and its moon, Europa.

This idea of taking the internet to space comes from the need for a low-cost, high-reliability interplanetary network. It is not that there was no communication earlier. When countries started sending probes into space, each used a unique set of protocols to communicate with the earth. This was done using the Deep Space Network (DSN) developed by NASA. Since these probes communicated with the same ground stations, the need for a common protocol increased with time. Taking the internet to space is an offshoot of this need for standardization. The Interplanetary Network (IPN), a part of the Jet Propulsion Laboratory (JPL), is managing this program.


But how will this be implemented? One can plan how the internet will work on the earth because of its fixed size and the fixed paths along which the data has to travel. For the interplanetary implementation, the planets will be connected through individual dedicated gateways. The individual networks can follow their own protocols, but these protocols will end at the gateway. By keeping the internets of all the planets separate, engineers will not have to make long service calls. Besides, they will not have to send a database of 20 million dotcom names to Mars periodically.
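A toy Python sketch of that gateway idea is given below: each planetary network keeps its own local message format, and only the gateway strips it off and re-wraps the data in a common interplanetary envelope. All class and field names are hypothetical, invented purely for illustration.

```python
# Hypothetical sketch of per-planet networks whose local protocols
# terminate at a dedicated gateway (names are illustrative only).

class LocalNetwork:
    """A planetary network with its own internal message format."""
    def __init__(self, planet, header):
        self.planet = planet
        self.header = header          # local protocol marker

    def emit(self, payload):
        # Local protocol: used only inside this planet's network.
        return f"{self.header}|{payload}"

class Gateway:
    """Strips the local protocol and re-wraps data in a common
    interplanetary envelope before it leaves the planet."""
    def __init__(self, network):
        self.network = network

    def send(self, payload, destination):
        local_frame = self.network.emit(payload)
        data = local_frame.split("|", 1)[1]   # local protocol ends here
        return {"src": self.network.planet, "dst": destination, "data": data}

mars = Gateway(LocalNetwork("Mars", "MARS-LOCAL"))
print(mars.send("rover telemetry", "Earth"))
# {'src': 'Mars', 'dst': 'Earth', 'data': 'rover telemetry'}
```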

These gateways will work on a bundle-based protocol, which will reside above the transport layer to carry data from one gateway to another. A gateway may not be on the surface of a planetary body; it can be a spacecraft in orbit, too. A bundle protocol will be needed because the data will have to travel huge distances, and sending small packets of data may not be feasible. Instead, the data will be collected and sent in a bundle, as a big burst of data, to the next gateway.
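The hypothetical snippet below illustrates the bundling idea: small messages are accumulated at a gateway and forwarded as one big burst once enough of them have piled up. The four-message threshold and all names are made up for the example.

```python
# Illustrative sketch of bundling: small messages are held back and
# forwarded as a single burst instead of many tiny transmissions.
# The 4-message threshold is arbitrary, chosen only for the demo.

class BundleQueue:
    def __init__(self, bundle_size=4):
        self.bundle_size = bundle_size
        self.pending = []

    def submit(self, message):
        """Queue a small message; return a bundle when enough accumulate."""
        self.pending.append(message)
        if len(self.pending) >= self.bundle_size:
            bundle, self.pending = self.pending, []
            return bundle        # one big burst for the next gateway
        return None              # keep waiting, nothing sent yet

queue = BundleQueue()
for i in range(1, 6):
    burst = queue.submit(f"packet-{i}")
    if burst:
        print("forwarding bundle:", burst)
# forwarding bundle: ['packet-1', 'packet-2', 'packet-3', 'packet-4']
```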

Thursday, 11 March 2021

 

Interconnection of Computer Networks


    With the development of individual computer networks comes the need to interconnect them. Network designers are faced with the heterogeneity of networks just as they were previously faced with the heterogeneity of computers within a single network. This paper shows that similar structuring techniques, namely multiplexing, switching, cascading, wrapping and layering, can be applied, and that a set of simple principles can be derived which greatly facilitate the design of the interconnection of computer networks.

    These simple principles are applied to the analysis of some typical examples of network interconnection problems, in the areas of addressing, routing, non-equivalent communication services, error control, flow control and terminal access. Similar principles could be applied to some unresolved issues in computer network interconnection, such as congestion control or administrative functions. It is finally claimed that the final objective of network interconnection studies is to determine the set of international standards required to make network interconnection straightforward in the near future.
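As a loose illustration of two of the structuring techniques mentioned above, wrapping and layering, the hypothetical Python sketch below wraps a payload in successive protocol headers on the way out and strips them again on receipt. The layer names are invented for the example and do not come from the paper.

```python
# Hypothetical sketch of layering/wrapping: each layer adds its own
# header around the data of the layer above, and strips it on receipt.

LAYERS = ["APP", "TRANSPORT", "NETWORK", "LINK"]   # invented layer names

def wrap(payload):
    """Encapsulate the payload with one header per layer, top to bottom."""
    for layer in LAYERS:
        payload = f"{layer}[{payload}]"
    return payload

def unwrap(frame):
    """Strip the headers again in reverse order."""
    for layer in reversed(LAYERS):
        assert frame.startswith(f"{layer}[") and frame.endswith("]")
        frame = frame[len(layer) + 1 : -1]
    return frame

frame = wrap("hello")
print(frame)            # LINK[NETWORK[TRANSPORT[APP[hello]]]]
print(unwrap(frame))    # hello
```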

    Data processing is gradually evolving from its original model to networking and distributed processing. Computers have been linked into individual networks to satisfy the needs of individual organizations. Now, networks must be interconnected to cater to inter-organizational relationships. Even though this requirement for the interconnection of computer networks was identified early, it is only recently that the problem has been widely recognized.

    A set of simple rules can help tremendously in analysing specific interconnection problems, as well as improve the potential interconnectability of a network through proper design choices. The first question to be raised is "What is specific to network interconnection, as opposed to building a single network?". Basically, an interconnected set of networks can be considered from an external (user's) point of view and from an internal (designer's) point of view. From a user's viewpoint, an interconnected set of networks is not different from a single network.


    In particular, two identical networks can usually be integrated into a single bigger one. In addition, it is essential to preserve freedom in the design of future computer networks, but still be able to interconnect them with existing ones. In other words, the question is "how to interconnect heterogeneous networks" rather than "how to build a worldwide homogeneous network". Before being faced with the constraint of interconnecting heterogeneous networks, network designers had been faced with the problem of interconnecting heterogeneous computers.

Conclusion

    Interconnection of computer networks is a complex problem and largely still an open question. However, it has been solved satisfactorily in a number of cases, permitting partial interconnection. Experience shows that a set of simple rules can be applied to analyze network interconnection problems. Of course, these simple principles are not sufficient, and practical experience is still essential. It could reasonably be expected that the same type of techniques could be applied to the remaining network interconnection issues, but this has still to be tried.

     The final objective of all present studies and experiments in network interconnection should be to determine which common properties networks must exhibit to make them readily interconnectable, and to establish these as international standards. Common levels of service, expandability of network addresses to global addresses, a common layering structure, and common protocols on top of common services are all candidates for standardization.







Thursday, 11 February 2021

5 Programming Languages in 2021

 


Elm is becoming popular within the JavaScript community, primarily among those who prefer functional programming, which is on the rise. Like Babel, TypeScript, and Dart, Elm transpiles to JavaScript.

Rust is a systems programming language meant to replace a lot of C and C++ development—which is why it's surprising to see this language's popularity growing the fastest among web developers. It makes a little more sense when you find out that the language was created at Mozilla, which is looking to give web developers who are forced to write low-level code a better option that's more performant than PHP, Ruby, Python, or JavaScript. Rust was also crowned the "most loved" technology in StackOverflow's 2016 developer survey (meaning it had the most users who wanted to keep using it).

Kotlin has been around for about five years, but it finally reached the production-ready version 1.0 this year. Although it hasn't achieved the popularity of Scala, Groovy, or Clojure—the three most popular and mature (non-Java) JVM languages—it has separated itself from the myriad other JVM languages and seems poised to take its place among the leaders of this group. It originated at JetBrains—makers of the popular IntelliJ IDEA IDE. So you know it was crafted with developer productivity in mind. Another major reason Kotlin has a bright future—you can easily build Android apps with it.

Crystal is another language that hopes to bring C-like performance into the highly abstracted world of web developers. Crystal is aimed at the Ruby community, with a syntax that is similar to and, at times, identical to Ruby's. As the already large number of Ruby-based startups continues to grow, Crystal could play a key role in helping take those applications' performance to the next level.

Elixir also takes a lot of inspiration from the Ruby ecosystem, but instead of trying to bring C-like benefits, it's focused on creating high-availability, low-latency systems—something Rails has had trouble with, according to critics. Elixir achieves these performance boosts by running on the Erlang VM, which has a strong performance reputation built over its 25 years in the telecom industry. The Phoenix application framework for Elixir—more than any piece of this blooming ecosystem—has given this language legs.

 

Monday, 8 February 2021

Counting Elephants from SPACE

 

Marudhar Kesari Jain College For Women, Vaniyambadi
PG & Research Department of Computer Science