Thursday, December 26, 2019

Does Life Exist Elsewhere in the Cosmos?

The search for life on other worlds has consumed our imaginations for decades. Humans feed on a constant supply of science fiction stories and movies such as Star Wars, Star Trek, and Close Encounters of the Third Kind, all of which cheerfully suggest that they are out there. People find aliens and the possibility of alien life fascinating, and wondering whether aliens have walked among us is a popular pastime. But do they really exist out there? It's a good question.

How the Search for Life Is Done
These days, using advanced technology, scientists may be on the verge of discovering places where life not only exists but may well be thriving. Worlds with life on them may be all over the Milky Way Galaxy. They could also be in our own solar system, in places that aren't exactly like the life-friendly habitats that exist here on Earth. It's not just a search for life, however. It's also about finding places that are hospitable to life in all its many forms. Those forms may be like the life that exists on Earth, or they could be very different. It also means understanding the conditions in the galaxy that enable the chemicals of life to assemble in just the right way.

Astronomers have found more than 5,000 exoplanets in the galaxy. These are worlds circling other stars. There are many more candidate worlds to be studied. How do they find them? Space-based telescopes such as the Kepler Space Telescope look for them using specialized instruments. Ground-based observers also look for extrasolar planets using very sensitive instruments attached to some of the world's largest telescopes.

Once they find worlds, the next step for scientists is to figure out if they are habitable. That means astronomers ask the question: can this planet support life? On some, conditions for life could be quite good. Some worlds, however, orbit too close to their star, or too far away. The best chances for finding life lie in the so-called habitable zones. These are regions around the parent star where liquid water, which is necessary for life, could exist. Of course, there are many other scientific questions to be answered in the search for life.

How Life Is Made
Before scientists can understand whether life exists on a planet, it's important to know how life arises. One major sticking point in discussions of life elsewhere is the question of how it gets started. Scientists can manufacture cells in a laboratory, so how hard could it really be for life to spring up under the right conditions? The problem is that they are not actually building cells from raw materials; they take already living cells and replicate them. That's not the same thing at all. There are a couple of facts to remember about creating life on a planet. First, it's NOT simple to do: even if biologists had all the right components and could put them together under ideal conditions, no one can yet make even one living cell from scratch. It may very well be possible someday, but not now. Second, scientists don't really know how the first living cells formed. Sure, they have some ideas, but no one has replicated the process in a lab.

What they do know are the basic chemical building blocks of life. The elements that formed life on our planet existed in the primordial cloud of gas and dust from which the Sun and planets arose. That includes carbon, hydrocarbons, and the other molecular pieces and parts that make up life. The next big question is how it all came together on early Earth to form the first one-celled life forms.
There's not a complete answer to that one, yet. Scientists know conditions on early Earth were conducive to life: the right mix of elements was there. It was just a matter of time and mixing before the earliest one-celled organisms came about. But what was it that spurred all the right things in the right place to form life? Still unanswered. Yet, life on Earth — from the microbes to the humans and plants — is living proof that it is possible for life to form. So, if it happened here, it could happen elsewhere, right? In the vastness of the galaxy, there should exist another world with the conditions for life, and upon that tiny orb life would have sprung up. Right? Probably. But no one knows for sure yet.

How Rare Is Life in Our Galaxy?
Given that the galaxy (and the universe, for that matter) is rich with the basic elements that went into creating life, it's very likely that yes, there are planets with life on them. Sure, some birth clouds are going to have slightly different mixes of elements, but in the main, if we're looking for carbon-based life, there's a good chance it's out there. Science fiction likes to talk about silicon-based life and other forms not familiar to humans. Nothing rules that out. But there's no convincing data showing the existence of any life out there. Not yet. Attempting to estimate the number of life forms in our galaxy is a bit like guessing the number of words in a book without being told which book. Since there is a large disparity between, for instance, Goodnight Moon and Ulysses, it is safe to say that the person doing the guessing doesn't have enough information. That may seem a bit depressing, and it's not the answer everybody wants. After all, humans LOVE science fiction universes where other life forms are teeming out there. Chances are, there is life out there. But, just not enough proof. And that raises the question: if there IS life, how much of it is part of an advanced civilization? That's important to think about because life could be as simple as a microbial population in an alien sea, or it could be a full-blown space-faring civilization. Or somewhere in between.

However, the lack of proof doesn't mean there isn't any life. Scientists have devised thought experiments to figure out how many worlds might have life in the galaxy, or the universe. From those experiments, they've come up with a mathematical expression to give an idea of how rare (or not) other civilizations may be. It's called the Drake Equation and looks like this: N = R* · fp · ne · fl · fi · fc · L, where N is the number you get if you multiply the following factors together: the average rate of star formation, the fraction of stars that have planets, the average number of planets per star that can support life, the fraction of those worlds that actually develop life, the fraction of those that develop intelligent life, the fraction of civilizations that have communications technologies that make their presence known, and the length of time they've been releasing detectable signals. (A small worked example of this arithmetic appears at the end of this section.)

Scientists plug numbers in for all these variables and come up with different answers depending on what numbers are used. It turns out there could be just ONE planet (ours) with life, or there could be tens of thousands of possible civilizations out there.

We Just Don't Know — Yet!
So, where does this leave humans with an interest in life elsewhere? With a very simple, yet unsatisfying conclusion. Could life exist elsewhere in our galaxy? Absolutely. Are scientists certain of it? Not even close.
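Here is that worked example of the Drake Equation's arithmetic. Every value below is a made-up placeholder chosen only to show how the factors combine, not a measured estimate, and the variable names simply mirror the factors listed above.

// Illustrative Drake Equation arithmetic; all inputs are arbitrary placeholders.
#include <iostream>

int main() {
    double R_star = 1.5;    // average rate of star formation (stars per year)
    double f_p    = 0.9;    // fraction of stars that have planets
    double n_e    = 2.0;    // average number of planets per star that could support life
    double f_l    = 0.1;    // fraction of those planets that actually develop life
    double f_i    = 0.01;   // fraction of life-bearing planets that develop intelligence
    double f_c    = 0.1;    // fraction of civilizations that release detectable signals
    double L      = 1000.0; // years such civilizations keep releasing signals

    double N = R_star * f_p * n_e * f_l * f_i * f_c * L;
    std::cout << "N (detectable civilizations) = " << N << std::endl; // 0.27 with these inputs
    return 0;
}

With these pessimistic placeholders N comes out below one; swap in more optimistic fractions and it climbs into the thousands, which is exactly why the equation produces such a wide range of answers.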
Unfortunately, until humanity actually makes contact with a people not of this world, or at least begins to fully understand how life came to exist on this tiny blue rock, the questions about life elsewhere aren't going to be answered. It's most likely that scientists will find evidence of life beyond Earth in our own solar system first. But that search requires more missions to other places, such as Mars, Europa, and Enceladus. That discovery may come about much faster than the discovery of life on worlds around other stars. Edited by Carolyn Collins Petersen.

Wednesday, December 18, 2019

The Islamic State Of Iraq And Syria

East Asia is globalizing at a rapid speed, and because of its large influence on the rest of the world it is important to analyze the progression occurring there. Currently, the war against terrorism is a growing concern, and countries around the world have come together to reach a consensus about confronting terrorism, specifically a unified stance against ISIS. ISIS, which stands for the Islamic State of Iraq and Syria, is a terror organization that has claimed responsibility for recent bombings in Paris, Belgium, and Pakistan (1). The Obama Administration has shown great attention and focus towards ending ISIS and creating unity among allied nations. On November 15, 2015, President Obama met with the presidents of… On a local scale, communities in East Asia are aware of and understand the threat of ISIS. Early in 2015, two Japanese hostages were captured by ISIS and were later killed by the terror group (3). During this time the Japanese government worked diligently with international alliances and the local communities to assure that anything that could be done would be done. Unfortunately, the hostages were killed, but such a scenario illustrates the global-local nexus of Japan. The Japanese people were obviously alarmed by the situation, and the government provided them with up-to-date notifications and reassurance that it was involved. This type of interaction carried over into Japan's global interactions with President Obama and French President Francois Hollande (3). These nations voiced their opposition to the acts of ISIS and agreed that such an organization needed to be ended. Unfortunately, many East Asian nations besides Japan have been affected by ISIS. In Indonesia there are about seven Islamist extremist groups that align their actions with ISIS. A recent bloody strike reported on January 14th in the capital's downtown district left eight dead, confirming the presence of ISIS on East Asia's doorstep. A group in the Philippines recently pledged its allegiance to ISIS in a short clip. A Starbucks in Thailand was attacked by two suicide bombers, and four additional explosions occurred

Tuesday, December 10, 2019

Jane Eyre's Self-Discovery

Jane Eyre's Self-Discovery Essay The novel Jane Eyre, by Charlotte Bronte, consists of a continuous journey through Jane's life towards her final happiness and freedom. Jane's physical journeys contribute significantly to plot development and to the idea that the novel is a journey through Jane's life. Each journey causes her to experience new emotions and an eventual change of some kind. These actual journeys help Jane on her four figurative journeys, as each one allows her to reflect and grow. Jane makes her journey from Gateshead to Lowood at the age of ten, finally freeing her from her restrictive life with her aunt, who hates her. Jane resented her harsh treatment by her aunt. Mrs. Reed's attitude towards Jane highlights one of the main themes of the novel: social class. Jane's aunt sees Jane as an inferior, someone less than a servant. Jane is glad to be leaving her cruel aunt and to have the chance of going to school. At Lowood she wins the friendship of everyone there, but her life is difficult because conditions at the school are poor. She comes to be respected by the teachers and students, largely due to the influence of her teacher, Miss Temple, who has taken on the roles of mother, governess, and companion. Jane has found in Miss Temple what Mrs. Reed always denied her. Also at Lowood, Jane confronts another main theme of the novel, natural violence, which Bronte depicts when typhus kills many of the students, including Jane's best friend, Helen Burns. This scene is especially important because it makes Jane stronger, which is appropriate, as mentally strong people cope with violence in a more rational way. As Jane grows up and passes the age of eighteen, she advertises herself as a governess and is hired at a place called Thornfield. Although journeying into the completely unknown, Jane does not look back, only forward to her new life and her freedom at Thornfield. This particular journey marks a huge change in Jane's life; it's a fresh start for her. Another important journey Jane makes is from Gateshead back to Thornfield, having visited her aunt Reed on her deathbed. By then Jane realizes that she loves Rochester. A key theme is raised here: Jane's fierce desire to love and to be loved. She feels alone and isolated when she has no friends around her. This is a sharp contrast to other characters' search for money and social position. These contrasting themes are strengthened with every journey she makes. When returning to Thornfield Jane is unhappy, but she keeps her promise to Mr. Rochester and his daughter. She believes at this point that Mr. Rochester is going to marry Blanche Ingram, and that she will have to leave Thornfield and never see Mr. Rochester again. However, Mr. Rochester offers his hand in marriage to Jane, but her happiness is short-lived after she finds out that he is still married to Bertha. Although so many terrible things are happening to her, her spirit remains unbroken. Jane flees from Thornfield and Mr. Rochester. Hearing Rochester's voice calling her prompts Jane's final journey, from St. John to Thornfield. Jane and Rochester's relationship blossoms once again, but differently than before. In the past, Jane felt inferior to Rochester because he was her employer and was wealthy. Jane now feels at perfect ease; Rochester has become a better man because of his disabilities. Ultimately, these four journeys mirror Jane's four emotional journeys. She transforms from an immature child to an intelligent adult.
Jane also changes from an innocent and naïve girl into a mature and strong-willed person. All of her experiences teach her how to love and feel loved, and to discover her true family roots.

Tuesday, December 3, 2019

Open Flow Essay Example

Load Balancing
Surya Prateek Surampalli, Information Technology Department, Southern Polytechnic State University, [emailprotected] edu

Abstract—In today's high-traffic Internet it is often desirable to have multiple servers that represent a single logical destination server to share the load. A typical configuration comprises multiple servers behind a load balancer that determines which server will serve a client's request. Such equipment is expensive, has a rigid set of rules, and is a single point of failure. In this paper, I propose an idea and design for an alternative load-balancing architecture built from an OpenFlow switch connected to a NOX controller; it gains policy flexibility, costs less, and has the potential to be more robust to failure with future generations of switches.

I. Introduction
In today's increasingly Internet-based cloud services, a client sends a request to a URL or logical server and receives a response from what may be multiple servers acting as one logical server. Google's servers are a familiar example: the request is sent to a server farm as soon as the client resolves the IP address from the URL [1]. A load balancer is an expensive device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase capacity (concurrent users) and the reliability of applications. They improve the overall performance of applications by decreasing the burden on servers associated with managing and maintaining application and network sessions, as well as by performing application-specific tasks [1]. Since load balancers are not commodity equipment and run custom software, their policies are rigid; specialist administrators are required, and arbitrary policies cannot be implemented. Since the policy and the switch are bound together, the device is reduced to a single point of failure [2]. An architecture built around an OpenFlow switch controlled by a commodity server would cost an order of magnitude less than a commercial load balancer, and it provides the flexibility to write modules that allow the controller to apply arbitrary policies [1]. If the next generation of OpenFlow switches can connect to multiple controllers, the system can be made much more robust to failure by allowing any server behind the switch to act as the controller [1].

II. Background
A. Load Balancing
Load balancing helps make networks more efficient. It distributes the processing and traffic evenly across a network, making sure no single device is overwhelmed [1]. Web servers, as in the example above, often use load balancing to evenly split the traffic load among several different servers. This allows them to use the available bandwidth more effectively, and therefore provides faster access to the websites they host [3]. Whether load balancing is done on a local network or a large web server, it requires hardware or software that divides incoming traffic among the available servers.
Networks that receive high amounts of traffic may even have one or more servers dedicated to balancing the load among the other servers and devices in the network. These servers are often called (not surprisingly) load balancers [1]. Load balancing can be performed using dedicated hardware devices such as load balancers or by using intelligent DNS servers. A DNS server can redirect traffic away from a data centre under heavy load, or redirect client requests to a data centre that is closer to those clients on the network. Many data centres use expensive hardware load-balancing equipment to distribute network traffic across multiple machines and avoid congestion on any one server. A DNS server resolves a hostname to a single IP address to which the client sends its request. To the outside world there is one logical address to which the host name resolves [3]. This IP address is not associated with a single machine; rather, it represents the type of service a client requests. DNS can resolve a host name to a load balancer within a data centre, but this is usually avoided for security reasons, to prevent attacks on the device. When a client request reaches the load balancer, the request is redirected according to the policy.

B. OpenFlow Switch
An OpenFlow switch is a software program or hardware device that forwards packets in a software-defined networking (SDN) environment. OpenFlow switches are either based on the OpenFlow protocol or compatible with it [1]. In a conventional switch, packet forwarding (the data plane) and high-level routing (the control plane) occur on the same device. In software-defined networking, the data plane is decoupled from the control plane. The data plane is still implemented in the switch itself, but the control plane is implemented in software and a separate SDN controller makes high-level routing decisions. The switch and controller communicate by means of the OpenFlow protocol. The OpenFlow switch in this design uses an external controller called NOX to add rules to its flow table.

C. NOX Controller
NOX is a network control platform that provides a high-level programmatic interface upon which network management and control applications can be built. In brief, NOX is an OpenFlow controller [3]. NOX applications mainly assert flow-level control of the network, meaning that they determine how each flow is routed, or not routed, in the network. The OpenFlow switch is connected to the NOX controller and communicates over a secure channel using the OpenFlow protocol. The current design of OpenFlow allows only one NOX controller per switch. The NOX controller decides how packets of a new flow should be handled by the switch. When a new flow arrives at the switch, the packet gets redirected to the NOX controller, which then decides whether the switch should drop the packet or forward it to a machine connected to the switch. The NOX controller can also delete or modify existing flow entries in the switch. The NOX controller can execute modules that describe how a new flow should be handled. This provides an interface for writing C++ modules that dynamically add or delete routing rules in the switch and can apply different policies for handling flows.

D. Flow Table
A flow table entry of an OpenFlow switch consists of header fields, counters, and actions. Each flow table entry stores Ethernet, IP, and TCP/UDP header information. This information includes destination/source MAC and IP addresses and source/destination TCP/UDP port numbers.
Each flow table entry also maintains counters of the number of packets and bytes that have arrived for the flow. A flow table entry can also have one or more action fields that describe how the switch will handle packets that match the flow entry. Some of the actions include sending the packet on all output ports, forwarding the packet on the output port of a particular machine, and modifying packet headers (Ethernet, IP, and TCP/UDP headers). If a flow entry does not have any actions, then the switch drops all packets for that particular flow. Each flow entry also has an expiration time after which the flow entry is deleted from the flow table. This expiration time is based on the number of seconds a flow was idle and the total amount of time (in seconds) the flow entry has been in the flow table. The NOX controller can choose to have a flow entry exist permanently in the flow table or can set timers which delete the flow entry when the timer expires.

III. Load-Balancer Design
The load-balancing architecture comprises an OpenFlow switch, a NOX controller, and server machines connected to the output ports of the switch. The OpenFlow switch uses one interface to connect to the Internet. Each server has a static IP address, and the NOX controller maintains a list of servers currently connected to the OpenFlow switch. Each server runs a web server emulation on a well-known port.

Figure 1. Load-balancer architecture using OpenFlow switch and NOX controller.

Each client resolves the server's hostname to an IP address and sends a request to that IP address on the well-known port number. In the diagram above, when a packet reaches the switch from a client, the packet's header information is compared with the entries of the flow table. If the header information matches a flow entry, the packet and byte counters are incremented and the actions associated with that entry are performed on the packet. If no match is found, the switch forwards the packet to NOX. NOX decides how the packet for this flow should be handled by the switch. NOX then inserts a new entry into the switch's flow table using the OpenFlow protocol. To achieve the load-balancing features, modules executed by the NOX controller are written in C++. NOX performs the function handle() when a new flow arrives at the switch. This function applies the load-balancing policy and adds new rules to the flow table of the switch. All client requests are destined for the same IP address, so the module executed by NOX adds rules for each flow that rewrite the packet's destination MAC and IP address with a chosen server's MAC and IP address. The switch forwards the packet to the server's output port after modifying the packet header. When servers return packets to the client, the module adds a flow entry that rewrites the source IP address back to the address to which the client sent its request, so the client always receives packets from the same IP address. If the client/server connection is closed or remains idle for 10 seconds, the inactivity timer expires, causing the flow entry to be deleted from the switch's flow table. This allows flow entries to be recycled. Servers register with NOX and then report their current load on a schedule, similar to the Listener Pattern.
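As a rough, self-contained sketch of the kind of module described above (this is not the author's actual code, and the struct and function names are illustrative stand-ins rather than real NOX API calls), the handler could pick the least-loaded server and build the rewrite rule to install in the flow table:

// Standalone sketch of the load-balancing decision described in this section.
// In a real NOX module the packet-in event would arrive from the switch and
// the rule would be installed via the OpenFlow protocol; here both are stubbed.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct ServerInfo {
    std::string mac;        // server MAC address
    std::string ip;         // server IP address
    std::uint16_t port;     // switch output port the server is attached to
    double load;            // most recent load reported via heartbeat
};

struct RewriteRule {
    std::string dstMac;     // new destination MAC (the chosen server)
    std::string dstIp;      // new destination IP (the chosen server)
    std::uint16_t outPort;  // switch port to forward the packet on
    int idleTimeoutSec;     // entry is removed after this much idle time
};

// Conceptually called when the switch sends the first packet of a new flow
// to the controller: choose the least-loaded server and build the rule.
RewriteRule handleNewFlow(std::vector<ServerInfo>& servers) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < servers.size(); ++i) {
        if (servers[i].load < servers[best].load) best = i;
    }
    // Bump the chosen server's recorded load until its next heartbeat so
    // successive new flows are not all routed to the same server.
    servers[best].load += 1.0;
    return RewriteRule{servers[best].mac, servers[best].ip,
                       servers[best].port, 10};
}

int main() {
    std::vector<ServerInfo> servers = {
        {"00:00:00:00:00:01", "10.0.0.1", 1, 0.2},
        {"00:00:00:00:00:02", "10.0.0.2", 2, 0.7},
    };
    RewriteRule rule = handleNewFlow(servers);
    std::cout << "rewrite destination to " << rule.dstIp << ", forward on port "
              << rule.outPort << ", idle timeout " << rule.idleTimeoutSec
              << " s" << std::endl;
    return 0;
}

The reverse rule for server-to-client traffic would be installed symmetrically, rewriting the source address back to the service's logical IP, exactly as the paragraph above describes.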
NOX, in a separate thread, listens on a UDP socket for heartbeats carrying the loads reported by the servers and maintains a table with the current load of every server. When a request for a new flow is received, it chooses the server with the lowest load and increases the load currently recorded for that server. This prevents all flows from being routed to the same server before that server reports a new load. It also breaks ties by falling back to round robin until the servers report their actual loads in a heartbeat.

Flow Algorithm
Require: Flow, path
1: sourceHost = LocateSource(flow);
2: destinationHost = LocateDestination(flow);
3: layer = setTopLayer();
4: currentSwitch = LocateCurrentSwitch();
5: direction = 1; // upward
6: path = null;   // list of switches
7: return search();

This algorithm works as follows. When the OpenFlow controller receives a packet from a switch, it hands control to the load balancer. Lines 1 to 6 initialize the necessary variables. The load balancer first analyses the packet's match information, including the input port on the switch that received the packet as well as the packet's source address and destination address. Then it looks up those addresses using its knowledge of the network topology. Once the source and destination hosts are located, the load balancer calculates the top layer that the flow needs to access. We use a search-direction flag with two values: 1 for upward and 0 for downward; it is initialized to 1. A path is created for saving a route, stored as a list of switches. Line 7 calls search(), which performs the search for paths recursively. The method search() first adds the current switch to the path. It returns the path if the current search reaches the bottom layer. It reverses the search direction if the current search reaches the top layer.

1: search() {
2:   path.add(curSwitch);
3:   if isBottomLayer(curSwitch) then
4:     return path;
5:   end if
6:   if curSwitch.getLayer() == layer then
7:     direction = 0; // reverse
8:   end if
9:   links = findLinks(curSwitch, direction);
10:  link = findWorstFitLink(links);
11:  curSwitch = findNextSwitch(link);
12:  return search();
13: }

It then calls a method that returns all links on the current switch that point in the current search direction. Only one link is chosen, by picking the worst-fit link with the maximum available bandwidth, and then the current switch object is updated. The method search() is called recursively, layer by layer, from the source to the destination. Finally, the path is returned to the load balancer. The path information will be used for updating the
Then the load balancer creates one FLOW MOD message for each switch in the path and sends it to the switch. This message will have the packet’s match information as well as a output port number on that switch. The output port number is directly calculated by the path and network topology. If one switch receives a FLOW MOD message, it will use it to update its ? ow table accordingly. Those packets buffered on ports of that switch may ? nd their matches in the updated ? ow table and be sent out. Otherwise the switch will repeat this process. IV. Future Work The OpenFlow specification includes an optional feature that would allow multiple NOXs to make active connections to the switch. In the case then of the NOX failing, another machine could assume the role of the NOX and continue routing traffic. Naturally the system would need to detect the failure, have a mechanism to remember any state associated with the current policy, and all servers would have to agree on who the new NOX was. These requirements naturally lend themselves to the Paxos consensus algorithm in which policy and leader elections can be held and preserved with provable progress [3]. We have implemented Paxos in another research project and could add it to our server implementation at the controller/signaler layer. As long as at least half of the nodes in the cluster stay up, state will be preserved and traffic should continue to flow. V. Conclusion It is possible to achieve similar functionality to a commercial load balancer switches using only physical commodities. The OpenFlow switch provides the flexibility to implement the arbitrary policy in software and politics separate the switch itself. Since the policy is decoupled from the switch, we can avoid the machine implementation of the policy of a single point of failure and the creation of a more robust system. References [1] OpenFlow Switch Specification. Version 0. 8. 9 (Wire Protocol 0x97). Current maintainer: Brandon Heller ([emailprotected] edu). December 2, 2008. [2] Web caching and Zipf-like distributions: evidence and implications. Breslau, L. Pei Cao Li Fan Phillips, G. Shenker, S. Xerox Palo Alto Res. Center, CA. INFOCOM 1999. [3] Paxos Made Simple. Leslie Lamport [4] M. Al-Fares, A. Loukissas, and A. Vahdat. A Scalable, Commodity Data Center Network Architecture. ACM SIGCOMM, 2008. [5] C. E. Leiserson. Fat-trees: Universal networks for hardware-ef? cient supercomputing. IEEE Transactions on Computers, 1985. [6] T. Benson, A. Anand, A. Akella, and M. Zhang. Understanding Datacenter Traf? c Characteristics. SIGCOMM WREN workshop, 2009. [7] HOPPS, C. Analysis of an Equal-Cost Multi-Path Algorithm. RFC 2992, IETF, 2000. [8] W. J. Dally and B. Towles. Principles and Practices of Interconnection Networks. Morgan Kaufmann Publisher, 2004. [9] S. Kandula, S. Sengupta, A. Greenberg, P. Patel and R. Chaiken. The Nature of Data Center Traf? c: Measurements Analysis. ACM IMC 2009. [10] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner. OpenFlow: Enabling Innovation in Campus Networks. ACM SIGCOMM CCR, 2008. [11] R. N. Mysore, A. Pamporis, N. Farrington, N. Huang, P. Miri, S. Radhakrishnan, V. Subramanya, and A. Vahdat. PortLand: A Scalable, Fault-Tolerant Layer 2 Data Center Network Fabric. ACM SIGCOMM, 2009. [12] Beacon OpenFlow Controller https://OpenFlow. stanford. edu/display/Beacon/Home. [13] B. Lantz, B. Heller, and N. McKeown. A Network in a Laptop: Rapid Prototyping for Software-De? nded Networks. 
ACM SIGCOMM, 2010. [14] Y. Zhang, H. Kameda, S. L. Hung. Comparison of dynamic and static load-balancing strategies in heterogeneous distributed systems. Computers and Digital Techniques, IEE, 1997. [15] OpenFlow Switch Specification, Version 1.0.0. http://www.OpenFlow.org/documents/OpenFlow-spec-v1.0.0.pdf. [16] N. Handigol, S. Seetharaman, M. Flajslik, N. McKeown, and R. Johari. Plug-n-Serve: Load-balancing web traffic using OpenFlow. ACM SIGCOMM Demo, 2009. [17] R. Wang, D. Butnariu, J. Rexford. OpenFlow-Based Server Load Balancing Gone Wild. Hot ICE, 2011. [18] M. Koerner, O. Kao. Multiple service load-balancing with OpenFlow. IEEE HPSR, 2012.

Figure 2: Load-balancer block diagram architecture using OpenFlow switch and NOX controller.

Wednesday, November 27, 2019

What is Cervical Cancer – Health Essay

Cervical cancer is cancer of the cervix. "Cancer is a class of diseases or disorders characterized by uncontrolled division of cells and the ability of these cells to invade other tissues, either by direct growth into adjacent tissue through invasion or by implantation into distant sites by 'metastasis'" (Wikipedia). The cervix is the lower, narrow part of the uterus. The uterus is where a baby grows during a woman's pregnancy. The cervix forms the pathway that opens into the vagina, which leads outside the body. Cervical cancer is a very dangerous disease that can be prevented by getting regular Pap smear tests and pelvic exams. Cervical cancer develops in the lining of the cervix; this condition usually develops over time. Normal cervical cells gradually go through changes to become precancerous and then cancerous. Cervical intraepithelial neoplasia (CIN) is the term used to describe these changes, and it is used to classify the degree of cell abnormality: low-grade CIN means minimal change in the cells, and high-grade CIN means there is a greater degree of abnormality (Yarbro). Cancer of the cervix is the second most common cancer in women worldwide, next to breast cancer, and is a leading cause of cancer-related death in women in underdeveloped countries. Invasive cervical cancer is more common in middle-aged and older women and in women of poor socioeconomic status, who are less likely to receive regular screening and early treatment. There is also a higher rate of incidence among African American, Hispanic, and Native American women (Hales). The cause of cervical cancer is the human papillomavirus (HPV), which is transmitted sexually. Evidence of HPV is found in nearly 80% of cervical carcinomas (Yarbro). Having multiple sexual partners, a history of STDs, and sexual intercourse at a young age are all sexual behaviors that increase the risk of HPV infection.

Saturday, November 23, 2019

DNA In Forensic Science Essays

Over the years, many different advances in technology have made the use of DNA in forensic science possible. In the past twenty years specifically, there have been many extraordinary discoveries in the fields of science that have led to the advancement of procedures in forensics. Before DNA testing, the most accurate way of identifying people was to match the blood types of suspects with blood found at the scene of the crime. Considering the lack of variability of this procedure, it is no surprise just how important the use of DNA in forensics has become. The evolution of applying DNA testing to forensics can be traced by looking at the Polymerase Chain Reaction, DNA Fingerprinting, and the Innocence Project. For instance, the history behind how DNA became a reliable tool in forensics goes all the way back to when DNA was first discovered. In the year 1869, a chemist named Friedrich Miescher first discovered DNA, which he called nucleic acid (Monsoon, 2013). However, it wasn't until 1952 that biologists were finally convinced by Alfred Hershey and Martha Chase of DNA's importance as the genetic material in organisms (2013). One year later, James Watson and Francis Crick deduced the structure of the DNA molecule. They proposed that it is a double helix with complementary nucleotide sequences (2013). Nonetheless, the most critical development in working towards using DNA in forensics was when Kary Mullis created the Polymerase Chain Reaction in 1983 (2013). Furthermore, the Polymerase Chain Reaction, or PCR, was the breaking point for using DNA in forensic science. PCR is a process that allows extremely small samples of DNA to become useful. This is done by taking a double-stranded DNA fragment and separating it into two single-stranded fragments. These two single-stranded fragments are then copied, which creates two double-stranded DNA fragments. This procedure is then repeated until there is enough DNA for analysis (2013). PCR is so powerful that a single hair will do (2013). Consequently, PCR could not truly be applied in forensics until DNA Fingerprinting was developed (Dale, Greenshank, Rooks, 2006). DNA Fingerprinting was invented by Alec Jeffreys three years after Kary Mullis developed PCR (2013). Like the fingerprints that came into use by detectives and police labs, each person has a unique DNA fingerprint (Betsey, 1994). DNA Fingerprinting is a process used in forensic science to identify people at a crime scene, and to tell how many people were present at the scene. This is done by exposing a DNA fragment to a radioactively tagged probe; any complementary strands that occur in the fragment will bind to the probe. The result is a set of barcode-like lines that is the DNA fingerprint (2013). It is obvious how PCR would come in handy in the DNA Fingerprinting process. If there is not enough DNA present for analysis, PCR can be applied to create a usable sample so that DNA Fingerprinting can be performed (PCR Introduction, 2009). These many advances made the start of the Innocence Project possible. Founded in 1992 by the lawyers Barry Scheck and Peter Neufeld (2013), the Innocence Project is an organization dedicated to exonerating the wrongfully convicted and preventing future injustice (The Innocence Project). One case example is the case of Orlando Bouquet.
He was convicted for attempted sexual battery and burglary on May 23rd, 2006 and was quickly released on August 22nd of the same year after DNA testing on the victim's clothing proved that he was not the man who committed the crime (The Innocence Project). Another example is the conviction of Steven Barnes in 1989 for a murder he did not commit. He was convicted based on questionable eyewitness identifications and three types of forensic science that had not been validated. Nearly two decades later, DNA testing obtained by the Innocence Project proved his innocence, and he walked away a free man on November 25th, 2008 (The Innocence Project). The Innocence Project has freed hundreds of convicted people over the past ten years (2013). This just goes to show how important DNA testing in forensics has truly become. PCR amplification, DNA Fingerprinting, and the Innocence Project are just a few of the uses that have come from the numerous discoveries concerning DNA. The use of DNA in forensics would not be possible without the people who made critical findings concerning DNA and without the development of PCR amplification and DNA Fingerprinting, which together served as the genesis of the Innocence Project. Thanks to the people that contributed to the discovery of DNA, its purpose, its structure, and its many uses, today there are several things that DNA is an essential part of. The development of PCR amplification was a vital step in using DNA as a key part of forensic science. DNA Fingerprinting is a more efficient, less expensive process that has become a very common tool in forensics, and the Innocence Project has become a pillar of the American criminal justice system (2013). The advances in science and technology over the past twenty years have had a major impact on many diverse parts of society. The advancements of DNA research are particularly noteworthy. With the help of PCR amplification and DNA Fingerprinting, hundreds of men and women who were wrongfully accused of a crime have been set free, and the true culprits have finally been put behind bars. It is astounding how far scientists have already come in their research, and it is mind-boggling to think about just how far their discoveries have yet to go.
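To make the doubling described above concrete, here is a tiny illustrative calculation of idealized PCR amplification; the cycle count is only a typical ballpark figure, and real reactions fall short of perfect doubling.

// Idealized PCR amplification: each cycle doubles the number of DNA copies.
#include <cstdint>
#include <iostream>

int main() {
    std::uint64_t copies = 1;  // start from a single double-stranded fragment
    const int cycles = 30;     // typical runs are on the order of 25-35 cycles
    for (int i = 0; i < cycles; ++i) {
        copies *= 2;           // idealized doubling; real efficiency is lower
    }
    std::cout << "After " << cycles << " cycles: about " << copies
              << " copies" << std::endl;  // 2^30, roughly a billion
    return 0;
}

That exponential growth is why even the DNA from a single hair can yield enough material for fingerprinting.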

Thursday, November 21, 2019

Using a country of your choice as an example demonstrate how the government seeks to compensate for market failures (Japan) - Essay

Externality effects would gradually become global as globally integrated markets develop. As externalities become huge they pose a challenge to achieving macroeconomic stability, which in turn challenges the international political architecture. 'Efficient' allocation of resources, according to economists, implies that all possible mutually beneficial trades have been exhausted (Holtom, 2011). This means that proper coordination between willing buyers and sellers has been accomplished. The nature and extent of market failure determine the role that government would play and whether government intervention is necessary at all. Markets rarely correspond to the ideal of a perfectly competitive market as defined by economic theory (Rama and Harvey). These imperfectly competitive markets may nonetheless have allocated resources efficiently to derive the best value. Certain conditions termed 'market failures' render government intervention necessary. While a failure to systematically allocate resources is evidence of inefficient allocation, this is not by itself sufficient reason to justify government intervention. Government intervention in markets can be costly, and the benefits must far outweigh the costs if government is to intervene. However, some governments believe that the role of government is benevolent during such externalities (Dolfsma, 2011). In fact, institutional economists believe that markets cannot function unless they are embedded in a broader set of interrelated institutions. However, government interventions can reduce efficiency through unintended consequences such as distortionary taxes, special interests, or maybe just simple errors of judgment (Holtom, 2011). Not all market failures warrant policy action, and hence cost-benefit analysis is essential. A market-oriented economy may produce income inequalities. A person may produce some very efficient product which benefits society, but there is no gain for the poorer people of that society. Moreover, it is not possible to exclude non-payers from utilizing a 'public good'. However, market failures occur when an inefficiently high or low amount of the good in question is produced and directed to markets where it does not receive the desired value (Holtom, 2011). This falls short of perfect market conditions. This can be applied to the entertainment and theme park industry in Japan. Japan is known for the largest global growth in theme parks and the amusement industry. Tokyo Disneyland (TDL) demonstrated solid performance and made a substantial impact on the host economy (Kawamura and Hara, 2010). Being part of the tourism industry, it brought in extensive cash flow from non-resident tourists. However, the rush of theme parks in Japan overlapped with the bubble economy in the late 1980s and early 1990s. Local governments in Japan suffered from deindustrialization following the bubble economy. Market failures in the theme park industry led to government intervention in several ways, but these interventions were found to be counterproductive. To revitalize the local economy, the development of theme parks was considered essential. Resources were inefficiently allocated to make the theme parks sustainable and help the local economies. An abundance of construction loans was given for theme parks. In addition, the central governments paid subsidies to the local governments and the