936,090 documents found
Raw query: https://api.istex.fr/document?q=corpusName.raw:"springer-ebooks"&size=10&from=0&rankBy=qualityOverRelevance&output=corpusName,title,doi,accessCondition.contentType,fulltextUrl,host.title,host.genre,author,abstract,genre,publicationDate,arkIstex,fulltext,metadata,annexes,enrichments&sid=istex-search&facet=corpusName[*],language[*],publicationDate,host.genre[*],genre[*],enrichments.type[*],categories.wos[*],categories.scienceMetrix[*],categories.scopus[*],categories.inist[*],qualityIndicators.pdfWordCount,qualityIndicators.pdfCharCount,qualityIndicators.score,qualityIndicators.pdfVersion[*],qualityIndicators.refBibsNative,qualityIndicators.abstractCharCount[1-1000000],qualityIndicators.pdfText,qualityIndicators.tdmReady,qualityIndicators.teiSource
Abstract: Cloud computing has the potential to improve resource efficiency by consolidating many virtual computers onto each physical host. This economization is based on the assumption that a significant percentage of virtual machines are indeed not fully utilized. Yet, despite the much acclaimed pay-only-for-what-you-use paradigm, public IaaS cloud customers are usually still billed by the hour for virtual systems of uncertain performance rather than on the basis of actual resource usage. Because ensuring and proving availability of defined performance for collocated multi-tenant VMs poses a complex technical problem, providers are still reluctant to give performance guarantees. Lacking such guarantees, prevailing cloud products sit in the low price segment, where providers resort to overbooking and double-selling capacity in order to maintain profitability, thereby further harming trust and cloud adoption. In this paper we argue that the predominant flat-rate billing, in conjunction with the practice of overbooking and its associated mismatch between actual costs and billed charges, results in a substantial misalignment between the interests of providers and customers that stands in the way of trustworthy and sustainable cloud computing. On these grounds, we propose a hybrid IaaS pricing model that aims to avoid these problems in a non-technical fashion by shifting to consumption-based billing on top of credible minimum performance. Requiring only measures that can be obtained with a low degree of technical complexity and a moderate amount of trust, the approach aspires to be more sustainable, practicable and billable than common practice, even without the use of complex should-I verifiability.
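A rough sketch of what such a hybrid bill could look like, assuming a flat reservation charge for the guaranteed performance floor plus a metered charge for measured consumption; the rates, function name and formula below are invented for illustration and are not taken from the paper.

```python
# Hypothetical hybrid IaaS bill: flat fee for a credible minimum-performance
# reservation plus a metered fee for measured consumption. All rates and the
# exact formula are illustrative assumptions, not the paper's model.

def hybrid_bill(hours, reserved_vcpus, measured_cpu_hours,
                reservation_rate=0.01, usage_rate=0.04):
    """Return (reservation_part, usage_part, total) for one billing period."""
    reservation_part = hours * reserved_vcpus * reservation_rate
    usage_part = measured_cpu_hours * usage_rate
    return reservation_part, usage_part, reservation_part + usage_part

# A mostly idle VM pays mainly for its guaranteed floor ...
print(hybrid_bill(hours=720, reserved_vcpus=2, measured_cpu_hours=100))
# ... while a busy VM pays mainly for what it actually consumed.
print(hybrid_bill(hours=720, reserved_vcpus=2, measured_cpu_hours=1200))
```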
Abstract: Stabilization, trim, and control devices are the prerequisites for flyability and controllability of aerospace flight vehicles. Regarding re-entry flight of RV-W's, we mentioned in Section 2.1.2 that initially, at high altitudes, the reaction control system (RCS) is the major flight control system. Aerodynamic trim, stabilization and control surfaces take over further down on the trajectory. This is in contrast to CAV's, where aerodynamic stabilization and control surfaces are the only devices. Obviously, deploying such devices results in a strong coupling of the thrust vector (and the aerothermoelasticity of the airframe), Sub-Section 2.2.3, into the flight dynamics, trim and control of the vehicle. Moreover, for ARV's, the trajectory is such that they reach high altitudes in a situation similar to RV-W's, so that control surface effectiveness eventually is diminished and reaction control systems (RCS) have to be deployed. In other words, ARV's require two different control systems. On the other hand, the capsule RV-NW's, as a rule, have only RCS for flight control. We begin this chapter with an introduction to trim and control surface aerothermodynamics, and then concentrate on the onset flow characteristics. Next treated are the asymptotic behavior of pressure, the thermal state of the surface, and the wall shear stress on the control surface, approximated here as a ramp. Related issues of reaction control systems are discussed briefly. Configurational considerations are presented regarding the discussed trim and control devices. Finally, the results are summarized and simulation issues are examined. Our aim is to foster the understanding of the flow phenomena involved in the operation of stabilization, trim, and control devices and of the problems related to their simulation with experimental and computational means.
Abstract: Modern networks assemble an ever-growing number of nodes. However, it remains difficult to increase the number of channels per node, so the maximal degree of the network may be bounded. This is typically the case in grid topology networks, where each node has at most four neighbors. In this paper, we address the following issue: if each node is likely to fail in an unpredictable manner, how can we preserve some global reliability guarantees when the number of nodes keeps increasing unboundedly? To be more specific, we consider the problem of reliably broadcasting information on an asynchronous grid in the presence of Byzantine failures – that is, some nodes may have an arbitrary and potentially malicious behavior. Our requirement is that a constant fraction of correct nodes remain able to achieve reliable communication. Existing solutions can only tolerate a fixed number of Byzantine failures if they adopt a worst-case placement scheme. Besides, if we assume a constant Byzantine ratio (each node has the same probability to be Byzantine), the probability of a fatal placement approaches 1 as the number of nodes increases, and the reliability guarantees collapse. In this paper, we propose the first broadcast protocol that overcomes these difficulties. First, the number of Byzantine failures that can be tolerated (if they adopt the worst-case placement) now increases with the number of nodes. Second, we are able to tolerate a constant Byzantine ratio, however large the grid may be. In other words, the grid becomes scalable. This result has important security applications in ultra-large networks, where each node has a given probability to misbehave.
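The flavour of such protocols can be illustrated with a toy acceptance rule: a node only adopts a value once enough distinct neighbours have relayed the same value, so that a bounded number of Byzantine neighbours cannot force a forged value on it. This sketch shows only the general multipath-voting idea, not the protocol proposed in the paper; all identifiers and the threshold are hypothetical.

```python
# Toy multipath acceptance rule: accept a value only after `threshold`
# distinct neighbours have reported the same value. A simplified flavour of
# Byzantine-tolerant broadcast on grids, not the paper's protocol.

from collections import defaultdict

class Node:
    def __init__(self, node_id, threshold=2):
        self.node_id = node_id
        self.threshold = threshold
        self.votes = defaultdict(set)   # value -> set of neighbour ids
        self.accepted = None

    def receive(self, neighbour_id, value):
        """Record a vote; accept the value once enough neighbours agree."""
        self.votes[value].add(neighbour_id)
        if self.accepted is None and len(self.votes[value]) >= self.threshold:
            self.accepted = value
        return self.accepted

node = Node("n_2_3", threshold=2)
print(node.receive("n_1_3", "msg"))      # None: one copy is not enough yet
print(node.receive("n_2_2", "forged"))   # None: a lone Byzantine vote cannot win
print(node.receive("n_3_3", "msg"))      # "msg": a second distinct neighbour agrees
```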
Abstract: Reactive jamming in an underwater sensor network (UWSN) environment is a realistic and very harmful threat. It typically affects only a small part of a packet (not the entire one), in order to maintain a low detection probability. Prior works on reactive jamming detection focused on terrestrial wireless sensor networks (TWSNs), and are limited in their ability to (a) detect it correctly, (b) distinguish the small corrupted part of a packet from the uncorrupted part, and (c) adapt to a dynamic environment. Further, there is currently a need for a generalized framework for jamming detection that outlines the basic operations governing it. In this paper, we address these research lacunae by broadly designing such a framework for jamming detection, and specifically a detection scheme for reactive jamming. A key characteristic of this work is the introduction of the concept of the partial packet (PP) in jamming detection. Such an approach is unique – the existing works rely on holistic packet analysis, which degrades their performance – a fundamental issue that substantially affects achieving real-time performance. We estimate the probability of high deviation in received signal strength (RSS) using a weak estimation learning scheme, which helps absorb the impact of a dynamic environment. Finally, we perform a CUSUM test for reactive jamming detection. We evaluate the performance of our proposed scheme through simulation studies in a UWSN environment. Results show that, as envisioned, the proposed scheme is capable of accurately detecting reactive jamming in UWSNs, with 100% true detection, while keeping the average detection delay small.
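Since the final decision step is a CUSUM test, a minimal one-sided CUSUM over a stream of RSS-deviation samples can illustrate how a sustained shift gets flagged; the drift and threshold values below are placeholders, not the parameters of the proposed scheme.

```python
# Minimal one-sided CUSUM sketch: flag the first index at which a sustained
# upward shift in RSS deviation is detected. Drift and threshold are
# illustrative, not the values used in the paper.

def cusum_detect(samples, target_mean, drift=0.5, threshold=5.0):
    """Return the index where the CUSUM statistic crosses the threshold,
    or None if no change is detected."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return i
    return None

rss_dev = [0.2, 0.1, 0.3, 0.2, 2.5, 2.8, 3.1, 2.9]   # shift starts at index 4
print(cusum_detect(rss_dev, target_mean=0.2))          # -> 6 on this toy data
```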
Abstract: Traffic classification has received increasing attention in recent years. It aims at offering the ability to automatically recognize the application that has generated a given stream of packets from the direct and passive observation of the individual packets, or stream of packets, flowing in the network. This ability is instrumental to a number of activities that are of extreme interest to carriers, Internet service providers and network administrators in general. Indeed, traffic classification is the basic block required to enable any traffic management operation, from differentiated traffic pricing and treatment (e.g., policing, shaping, etc.) to security operations (e.g., firewalling, filtering, anomaly detection, etc.). Until a few years ago, almost every Internet application used well-known transport layer protocol ports that easily allowed its identification. More recently, the number of applications using random or non-standard ports has dramatically increased (e.g., Skype, BitTorrent, VPNs, etc.). Moreover, network applications are often configured to use well-known protocol ports assigned to other applications (e.g., TCP port 80, originally reserved for Web traffic), attempting to disguise their presence. For these reasons, and because of the importance of correctly classifying traffic flows, novel approaches based respectively on packet inspection, statistical and machine learning techniques, and behavioral methods have been investigated and are becoming standard practice. In this chapter, we discuss the main trends in the field of traffic classification and describe some of the main proposals of the research community. We complete the chapter by developing two examples of behavioral classifiers: both use supervised machine learning algorithms for classification, but each is based on different features to describe the traffic. After presenting them, we compare their performance using a large dataset, showing the benefits and drawbacks of each approach.
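As a sketch of a supervised behavioural classifier of the kind developed in the chapter, flows can be summarised by a few statistical features and fed to an off-the-shelf learner (here scikit-learn's random forest); the features, labels and data below are illustrative assumptions, not the two classifiers actually compared.

```python
# Behavioural traffic classification sketch: per-flow statistical features
# plus a standard supervised learner. Feature choice and data are invented
# for illustration. Requires scikit-learn.

from sklearn.ensemble import RandomForestClassifier

# features per flow: [packets, mean_packet_size_bytes, mean_inter_arrival_ms]
X_train = [
    [1200, 1400, 2.0],    # bulk transfer (BitTorrent-like behaviour)
    [  40,  180, 25.0],   # interactive (VoIP-like behaviour)
    [ 900, 1350, 3.5],
    [  55,  210, 30.0],
]
y_train = ["p2p", "voip", "p2p", "voip"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict([[1000, 1380, 2.8]]))   # -> ['p2p'] on this toy data
```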
Abstract: Computer networks, widely used by enterprises and individuals nowadays, remain vulnerable to traffic injection, human mistakes, malicious attacks and other failures, even though considerable time and cost are spent on security, dependability, performability, survivability, and risk assessment to make networks provide resilient services. This is because these measures are commonly viewed as closely related, but a practical means of linking them is often not achieved. Network resilience research brings this planning together so that the network can be managed from a holistic view of resilience management. This paper focuses on moving network resilience management from a "reactive" paradigm to a "proactive" one through Situational Awareness (SA) of the internal factors of the network and the external factors of a complex, dynamic and heterogeneous network environment. After surveying the research on network resilience and resilience assessment, we give a four-stage model for constructing awareness of resilience issues. The first stage collects the situational elements of interest. In the second stage, to understand what has happened and what is going on in the network, pattern learning and pattern matching are exploited to identify challenges. The third stage makes resilience management proactive by predicting challenges and looking for potential ones. In the fourth stage, resilience management takes remediation and recovery actions according to the policies of defender and attacker. The behaviors of the two players, defender and attacker, are then modeled together using Extended Generalized Stochastic Game Nets (EGSGN), which combine game theory with Stochastic Petri Nets. Finally, a case study shows how EGSGN can depict the network resilience situation in a single model.
Abstract: In this paper we address the task of finding convex cuts of a graph. In addition to the theoretical value of drawing a connection between geometric and combinatorial objects, cuts with this or related properties can be beneficial in various applications, e.g., routing in road networks and mesh partitioning. It is known that the decision problem whether a general graph is k-convex is $\mathcal{NP}$-complete for fixed k ≥ 2. However, we show that for plane graphs all convex cuts (i.e., k = 2) can be computed in polynomial time. To this end, we first restrict our consideration to a subset of plane graphs for which the so-called alternating cuts can be embedded as plane curves such that the plane curves form an arrangement of pseudolines. For a graph G in this set, we formulate a one-to-one correspondence between the plane curves and the convex cuts of a bipartite graph from which G can be recovered. Due to their local nature, alternating cuts cannot guide the search for convex cuts in more general graphs. Therefore we modify the concept of alternating cuts using the Djoković relation, which is of global nature and gives rise to cuts of bipartite graphs. We first present an algorithm that computes all convex cuts of a (not necessarily plane) bipartite graph H′ = (V,E) in $\mathcal{O}(|E|^3)$ time. Then we establish a connection between convex cuts of a graph H and the Djoković relation on a (bipartite) subdivision H′ of H. Finally, we use this connection to compute all convex cuts of a plane graph in cubic time.
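A small sketch of the Djoković relation in its distance form (Winkler's condition, which coincides with the Djoković relation on bipartite graphs): edges e = (x, y) and f = (u, v) are related iff d(x,u) + d(y,v) ≠ d(x,v) + d(y,u). The sketch only evaluates the relation via BFS distances; it is not the cubic-time convex-cut algorithm of the paper.

```python
# Djokovic-Winkler relation sketch: edges (x, y) and (u, v) are related iff
# d(x,u) + d(y,v) != d(x,v) + d(y,u). On bipartite graphs the classes of this
# relation induce cuts, which is the structure the paper exploits.

from collections import deque

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        a = q.popleft()
        for b in adj[a]:
            if b not in dist:
                dist[b] = dist[a] + 1
                q.append(b)
    return dist

def theta_related(adj, e, f):
    x, y = e
    u, v = f
    dx, dy = bfs_dist(adj, x), bfs_dist(adj, y)
    return dx[u] + dy[v] != dx[v] + dy[u]

# 4-cycle a-b-c-d: opposite edges are related, adjacent edges are not.
adj = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["a", "c"]}
print(theta_related(adj, ("a", "b"), ("d", "c")))   # True
print(theta_related(adj, ("a", "b"), ("b", "c")))   # False
```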
Abstract: We consider the question of constructing pseudorandom generators that simultaneously have linear circuit complexity (in the output length), exponential security (in the seed length), and a large stretch (linear or polynomial in the seed length). We refer to such a pseudorandom generator as an asymptotically optimal PRG. We present a simple construction of an asymptotically optimal PRG from any one-way function $f:\{0,1\}^n \to \{0,1\}^n$ which satisfies the following requirements: 1. $f$ can be computed by linear-size circuits; 2. $f$ is $2^{\beta n}$-hard to invert for some constant $\beta > 0$, and the min-entropy of $f(x)$ on a random input $x$ is at least $\gamma n$ for a constant $\gamma > 0$ such that $\beta/3 + \gamma > 1$. Alternatively, building on the work of Haitner, Harnik and Reingold (SICOMP 2011), one can replace the second requirement by: 2′. $f$ is $2^{\beta n}$-hard to invert for some constant $\beta > 0$ and it is regular in the sense that the preimage size of every output of $f$ is fixed (but possibly unknown). Previous constructions of PRGs from one-way functions can do without the entropy or regularity requirements, but even the best such constructions achieve slightly sub-exponential security (Vadhan and Zheng, STOC 2012). Our construction relies on a technical result about hardcore functions that may be of independent interest. We obtain a family of hardcore functions $\mathcal{H} = \{h:\{0,1\}^n\to\{0,1\}^{\alpha n}\}$ that can be computed by linear-size circuits for any $2^{\beta n}$-hard one-way function $f:\{0,1\}^n \to \{0,1\}^n$ where $\beta > 3\alpha$. Our construction of asymptotically optimal PRGs uses such hardcore functions, which can be obtained via linear-size computable affine hash functions (Ishai, Kushilevitz, Ostrovsky and Sahai, STOC 2008).
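To fix the meaning of "stretch" and "hardcore function", the classical textbook template for turning a one-way permutation f with hardcore function h into a PRG is recalled below; the paper's construction for general one-way functions is more involved, so this is background only, not the proposed scheme.

```latex
% Textbook template (one-way *permutation* f, hardcore function h),
% shown only as background; not the construction of the paper.
G(x) \;=\; f(x)\,\|\,h(x), \qquad
G : \{0,1\}^n \to \{0,1\}^{(1+\alpha)n},
\qquad \text{stretch } \alpha n .
```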
Abstract: In the previous chapter, we reconstructed a change log consisting of compound change operations that represent differences between different process model versions. Before we can apply the compound change operations in order to merge different process model versions, possible dependencies between change operations must be identified. Informally, if two change operations are dependent, then the second one requires the application of the first one. For instance, before an activity can be inserted into a new fragment, the fragment itself must be inserted. Otherwise, applying a dependent change operation can lead to a potentially unconnected process model and to problems when applying subsequent change operations. For example, inserting an activity into a fragment that does not exist yet leads to an unconnected activity and to problems when the fragment is inserted later. In this chapter, we introduce our approach to dependency analysis between compound change operations of process models. We begin by defining requirements for dependencies in Section 7.1. We approach the identification of dependencies by applying existing theory on dependent graph transformations and establish the notion of transformation dependencies between compound change operations in Section 7.2. In Section 7.3, we then show how the dependency detection can be further improved and introduce the concept of Joint-PST dependencies between change operations. Using dynamic specification of compound change operations, this approach results in fewer dependencies between change operations and thus more freedom when merging different process model versions. Finally, we conclude with a summary and discussion in Section 7.4. The following sections of this chapter are partially based on our earlier publications [Küster et al., 2009, Küster et al., 2010].
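The core dependency rule can be illustrated with a toy check: an operation that inserts an element into a fragment depends on the operation that inserts that fragment. The operation encoding below is an invented simplification, not the compound-change-operation formalism of the chapter.

```python
# Toy dependency check between change operations: op_a depends on op_b if
# op_a inserts something into an element that op_b itself inserts.
# The dict encoding is illustrative only.

def depends_on(op_a, op_b):
    return (op_a["kind"] == "insert" and op_b["kind"] == "insert"
            and op_a["into"] == op_b["element"])

insert_fragment = {"kind": "insert", "element": "Fragment_F", "into": "Process_P"}
insert_activity = {"kind": "insert", "element": "Activity_A", "into": "Fragment_F"}

print(depends_on(insert_activity, insert_fragment))  # True: F must exist first
print(depends_on(insert_fragment, insert_activity))  # False
```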
Abstract: With the advent of the Cloud Computing (CC) paradigm and the explosion of new Web Services offered over the Internet (such as Google Office Apps, Dropbox or Doodle, to cite just a few), the protection of the programs at the heart of these services becomes more and more crucial, especially for the companies making business on top of these services. In parallel, the overwhelming majority of modern websites use the JavaScript programming language, as all modern web browsers - whether on desktops, game consoles, tablets or smart phones - include JavaScript interpreters, making it the most ubiquitous programming language in history. Thus, JavaScript is the core technology of most web services. In this context, this article focuses on novel obfuscation techniques to protect JavaScript program contents. Informally, the goal of obfuscation is to make a program "unintelligible" without altering its functionality, thus preventing reverse engineering of the program. However, this approach attracted little attention from the research community after stand-alone obfuscation for arbitrary programs was proven impossible in 2001. Here we would like to renew this interest with the proposal of JShadObf, an obfuscation framework based on evolutionary heuristics designed to optimize, for a given input JavaScript program, the sequence of transformations that should be applied to the source code to improve its obfuscation capacity. Measuring this capacity is based on the combination of several metrics optimized simultaneously with Multi-Objective Evolutionary Algorithms (MOEAs). Whereas our approach cannot pretend to offer absolute protection, the objective remains to protect the target program for a sufficiently long period of time. The experimental results, obtained first on a pedagogical example and then on jQuery - the most popular and most widely used JavaScript library - outperform existing solutions. They demonstrate the validity of the approach and its concrete usage in reference code used worldwide.
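A toy sketch of the underlying search idea: candidate sequences of source transformations are scored on several metrics at once, and only the non-dominated (Pareto-optimal) candidates survive to the next generation. The transformations and metrics below are placeholders, not those implemented in JShadObf.

```python
# Multi-objective selection sketch: keep only Pareto-optimal transformation
# sequences. Transformations and metrics are illustrative placeholders.

def score(sequence):
    # two objectives to maximise: transformation diversity, minus code growth
    return (len(set(sequence)), -len(sequence))

def non_dominated(population):
    scored = [(score(c), c) for c in population]
    front = []
    for s, c in scored:
        dominated = any(all(o >= v for o, v in zip(other, s)) and other != s
                        for other, _ in scored)
        if not dominated:
            front.append(c)
    return front

population = [
    ["rename_identifiers"],
    ["rename_identifiers", "insert_dead_code"],
    ["rename_identifiers", "rename_identifiers", "split_expressions"],
    ["insert_dead_code", "insert_dead_code", "insert_dead_code"],
]
print(non_dominated(population))   # the first two sequences form the front
```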