Welcome to this humble post on business opportunities and doing business; these simple tips may be useful.
Perhaps you are eager to start a business, or are already in business and want to understand how to do it well, because a business run the right way will also run well. Okay, let's go through the steps of doing business well:
Determine what business you will run.
After deciding on the business, find out what it needs. For example, if the business will sell shoes, work out where you will source the shoes and where they will later be sold. For a food business, work out what ingredients are needed.
After that, identify the weaknesses and shortcomings of the business, then work out solutions for them.
Create a concept for how the business will run and what marketing techniques you will use. In any business, marketing is very important: good marketing can move a business forward.
Calculate the capital required and estimate how much profit can be earned.
If the projected profit looks promising, run the business.
Learn the business-opportunity concept of Rudy Setiawan, a world marketer of Indonesian origin who wants to help you earn a good living through a smart, easy-to-apply system; click here.
That is a good way of doing business; hopefully the steps above can be beneficial. Remember not to act in undue haste: think your business through carefully, because with proper planning a business can run smoothly according to plan. But if in fact the business does not go as planned, do not be discouraged; practice sometimes diverges from theory. Keep making progress, and hopefully the steps above help us understand what needs to be done in business.
Tuesday, 23 July 2013
How to Make Middle Eastern Cane Bread with Curry
Cane bread with curry is a distinctive Middle Eastern dish. Its spice aroma and strong flavor on the tongue attract enthusiasts of Middle Eastern cuisine.
Cane bread ingredients:
1. Palmia Super Cake margarine 45 g
2. red badge flour 350 g
3. water 150 g
4. salt 5 g
5. Orchid Butter coating (per piece) 25 g
6. total dough 850 g
7. Orchid Butter and chopped celery leaves for coating (to taste) 250 g
Goat curry ingredients:
1. minced garlic 100 g
2. diced mutton 150 g
3. Indofood curry seasoning 50 g
4. chopped onions 50 g
5. coconut milk 100 g
6. cooking oil 50 g
7. pepper 15 g
8. total mixture 550 g
9. salt 10 g
10. sugar 10 g
11. Indofood broth powder 15 g
How to make cane bread:
1. Combine all the ingredients until smooth.
2. Add the Palmia Super Cake margarine and stir until blended.
3. Add the eggs and pour in the water.
4. Knead the dough until smooth and no longer sticky.
5. Cut the dough into pieces weighing 60 grams each.
6. Cover the dough with a cloth or plastic and let it rest for 4 hours.
7. Roll out the dough to a diameter of 30 cm.
8. Coat the surface with Orchid Butter.
9. Roll it up into a long twist, coil it into a snail shape, and let it rest for 30 minutes.
10. Flatten the dough with a rolling pin and cook it in a non-stick skillet until done.
11. Serve with the goat curry.
How to make goat curry:
1. Combine the garlic and onions and sauté until really fragrant.
2. Add the mutton and the Indofood curry seasoning.
3. Stir-fry until fragrant.
4. Pour in the coconut milk and stir until smooth.
5. Add the salt, sugar, pepper and Indofood broth powder.
6. Cook until done; serve with fried onions as a garnish.
7. Enjoy the cane bread with the goat curry.
Sunday, 21 July 2013
How to Make Green Noodles
Green noodles are healthy noodles that you can now make yourself as a home industry.
The ingredients to make green noodles
1. high-protein flour 500 g
2. 1 chicken egg
3. spinach, 2 bunches
4. 4 tbsp oil
5. water
6. salt
7. a noodle press (if you do not have one, use a rolling pin and a knife)
8. starch
How to Make Green Noodles
1. Take the spinach leaves, separating them from the stems.
2. Rinse and blend them with 100 ml of water.
3. Strain off the juice.
4. Combine the flour, oil, egg and salt, kneading the dough while gradually adding the spinach juice.
5. Knead the dough until smooth.
6. After that, cut the dough; it is ready to be shaped in the noodle press.
These green noodles are what I used in making green meatball noodles. You may get creative and modify the recipe yourself. Hopefully this green noodle recipe and way of making green noodles is useful.
Wednesday, 17 July 2013
What Is Education?
Nationally, education is a means to unite all citizens into one nation. Through education, every student is facilitated, guided and nurtured to become a citizen who recognizes and exercises their rights and obligations. Education is also a powerful tool for placing every student on an equal footing.
The following is a common definition of education:
Education is a conscious and deliberate effort to create an atmosphere of learning and a learning process in which learners actively develop their potential, so that they acquire religious and spiritual strength, self-control, personality, intelligence, noble character, and the skills needed by themselves, society, the nation, and the state.
Saturday, 13 July 2013
Important Facts About Android
A thousand Android phone trinkets - still on Android: Android is a smartphone operating system with a touch screen, like the iPhone's iOS and the BlackBerry OS. Android was developed by Google and first appeared in 2007, with the T-Mobile G1 as its first mobile phone.
Are Android phones called Droids?
No. Droid is a Verizon Wireless brand of Android phones (Droid X, Droid Eris, Droid Incredible, and so on). The Sprint HTC EVO 4G is not a Droid but is still an Android smartphone.
Why choose an Android phone instead of an iPhone?
One reason is that Android integrates with Google services like Gmail, Google Calendar, Google Contacts and Google Voice, which makes it perfect for anyone who uses Google services.
One nice thing about Android is that the first time you turn it on, you are prompted for your Google username and password, and all your Google mail, contacts and other information start synchronizing to the handset without syncing to a desktop. Moreover, Android is open source, so developers are largely free to do what they like with applications.
What's so special about Android?
Unlike Apple's OS, Research In Motion's (RIM) or Microsoft Windows Mobile, Google released Android as an open-source OS under the auspices of the Open Handset Alliance. Beyond that, the Android OS is fast and robust, with an intuitive user interface packed with choice and flexibility, and Google continues to develop it.
What are the different versions of Android, such as Donut, Cupcake and Froyo?
Just like Apple's iOS, Google continually updates Android with new features. The latest version is Android 2.2, codenamed Froyo, which adds direct USB tethering, mobile hotspot functionality, and Flash support. This means Flash video and Flash modules that do not work on the iPhone will work in the Android web browser. Version 1.6, called Donut, added speed, higher screen resolution and faster camera and recorder applications. Version 1.5, Cupcake, added a video recorder.
If Android 2.2 exists, why are phones stuck on version 2.1 or even 1.6?
One weakness of Android is that so many versions are in circulation. Manufacturers and carriers decide whether to upgrade their phones to the latest version of Android.
On the other hand, the iPhone has only a few versions, so a new version of iOS can be rolled out to all devices at once and more easily.
How many apps are available for Android?
Approximately 70 thousand or more, and the number increases every day. Google does not give special treatment to any application.
The latest Android phones?
There is the HTC EVO 4G. Samsung will release its Galaxy S-class Android phones: thin and lightweight, with a high-contrast 4-inch Super AMOLED screen, available on all major U.S. carriers. If you are looking for an Android phone with a slide-out QWERTY keyboard, consider the Verizon Motorola Droid 2 or the Samsung Epic 4G.
Android's weaknesses?
When it comes to music and video, Android does not have an official desktop media-syncing client. In general, though, Android gives you more options for setting up your phone and its contents.
How do you choose an Android phone?
Android phones come in a variety of forms. Do you want a QWERTY keyboard or do you prefer a touchscreen? Are you looking for a big screen or something that slips easily into a pocket? Are you a heavy e-mail and text-message user, or more interested in watching movies and videos on a big screen? All of these questions are answered by the current Android lineup.
Saturday, 6 July 2013
Google hopes to attract developers with cloud-based back-end kit
Google wants more developers to use its App Engine cloud service, and has launched Mobile Backend Starter to make it easier.
Running servers on top of a hosted environment to power mobile apps can be a bit of a headache, according to Google. With the introduction of Mobile Backend Starter, the company hopes to lower the barrier for developers.
Mobile Backend Starter includes everything developers need to quickly set up a back end for their app, Google said in a blog post on Monday. The package includes a server that stores data using App Engine and a client library for Android that handles the communication between the app and the App Engine cloud.
Developers can also add support for Google Cloud Messaging (GCM). To keep users' data secure, Mobile Backend Starter also includes built-in support for Google Authentication, the company said. Features made possible with Mobile Backend Starter allow users to store data in the cloud and access it from anywhere. In addition, data updated on one device is automatically available on all devices via GCM.
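For a sense of what this looks like in an Android app, here is a minimal sketch of storing an object through the starter kit's client library. The class and method names (CloudEntity, CloudBackendAsync, CloudCallbackHandler) are reproduced here from memory of the sample's documentation, so treat them as assumptions to verify against the actual kit.

    import java.io.IOException;

    // Inside an Activity using the Mobile Backend Starter client library
    // (class names assumed as recalled, not verified against the kit):
    CloudEntity entity = new CloudEntity("Guestbook"); // "Guestbook" is a kind name we chose
    entity.put("message", "Hello from Android");

    CloudBackendAsync backend = new CloudBackendAsync(this);
    backend.insert(entity, new CloudCallbackHandler<CloudEntity>() {
        @Override
        public void onComplete(CloudEntity result) {
            // The entity is now stored via App Engine; devices subscribed
            // through Google Cloud Messaging are notified of the update.
        }

        @Override
        public void onError(IOException exception) {
            // Handle network or Google Authentication failures here.
        }
    });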
Google has published relevant documentation on its developer website.
Cisco Still Number One for Data Center Security
We were excited to read the Infonetics Data Center Security Strategies and Vendor Leadership: North American Enterprise Survey, which was released yesterday. It revealed Cisco’s continued leadership in a market that spans a multitude of vendors – application/database, client, data center integration and network. The report indicates that leaders need to offer the right mix of products across the data center security and cloud arenas as well as demonstrate security efficacy and integration into adjacent markets. Cisco has continued to execute on a unified security portfolio spanning firewalls, Intrusion Prevention System (IPS), gateways, and integrated threat intelligence further complemented by strategic partnerships. Seamless integration and shared security intelligence with routing and switching (Nexus and Catalyst) and converged infrastructure (Cisco UCS) enables our customers to benefit from optimized traffic links, the highest levels of security resilience, increased availability and scalability as well as lower costs of ownership. Per the report, “to say you’re the leader in the data center/cloud security is to say you are an innovator who can tackle the biggest problems in IT security for the biggest and most demanding customers.”
We’d like to highlight two areas in which Cisco has continued to demonstrate an outright lead over other vendors. In perception as the top data center security supplier, Cisco leads with 47 percent of votes, compared to IBM with 38 percent and McAfee with 28 percent, ranked second and third. Cisco scored between 40 and 60 percent of respondents’ votes (across 10 criteria) for being the leading data center security supplier, with McAfee scoring 15 points below Cisco, HP receiving around 20 percent of votes, and Juniper and Trend Micro around 15 percent.
Other notable findings include:
Respondents spent an average of US$14.6M on security products for the data center in 2012, growing to US$16.9M in 2013 (a 16 percent increase).
Seventy-nine percent of respondents indicated that the most significant transformation affecting enterprise data centers today and driving the purchase of new security solutions is server virtualization, with the transition from private to public cloud, the need for performance, and threat protection in an increasingly hostile landscape also ranking highly.
Sixty-nine percent of respondents indicated that the need to gain access to high-speed network interfaces is a key purchase driver.
More than half of the respondents plan to increase their spending in 10 different security technology areas over the next two years with antivirus/anti-malware topping the list.
The full Infonetics Report is available through subscription or through contacting Infonetics. It has some great criteria to keep in mind as you continue to make investments in data center security.
Cisco Certifications Keep Pace with Business Trends and Employer Needs
Jeff is now a systems architect at SNL Financial, a privately held data-services company with over 1,700 employees that rely on its worldwide communications network to supply data and analysis to many thousands of customers. Jeff plays a very important role in the continuing success of his company. He works on both short-term technical “hard problem” troubleshooting and medium- to long-term network architecture planning to satisfy company performance, reliability, and accuracy goals.
Moving forward, Jeff sees his role as tracking network technology and computing trends as they develop and integrating them into SNL Financial’s systems. “Years ago, a company’s IT staff could dictate that employees could check their email only after logging into the company VPN from a company-issued trusted laptop; those days are long gone,” Jeff explains. “Now, everybody wants to be connected from any device, at any time, with complete flexibility.” Jeff will surely be at the forefront of planning and supporting that vision of painless, secure, worldwide connectivity at his company.
As for the long run, SNL Financial has just completed building a new data center. Coincidentally, Cisco has announced a two-year, expert-level data center CCIE track that will give IT professionals like Jeff the opportunity to pursue and master implementing, operating, monitoring, and troubleshooting complex data center networks through targeted training.
Testing Development Software
Development Fuel: software testing in the large
by Adam Petersen and Seweryn Habdank-Wojewodzki, July 2012
Introduction
As soon as a software project grows beyond the hands of a single individual, the challenges of communication and collaboration arise. We must ensure that the right features are developed, that the product works reliably as a whole and that features interact smoothly. And all that within certain time constraints. These aspects combined place testing at the heart of any large-scale software project.
This article grew out of a series of discussions around the role and practice of tests between its authors. It's an attempt to share the opinions and lessons learned with the community. Consider this article more a collection of ideas and tips on different levels than a comprehensive guide to software testing. There certainly is more to it.
Keeping knowledge in the tests
We humans are social creatures. Yes, even we programmers. To some extent we have evolved to communicate efficiently with each other. So why is it so hard to deliver that killer app the customer has in mind? Perhaps it's simply because the level of detail present in our average design discussion is way beyond what the past millennia of evolution required. We've gone from sticks and stones to multi-cores and CPU caches, yet we handle modern technology with basically the same biological prerequisites as our pre-historic ancestors.
Much can be said about the human memory. It's truly fascinating. But one thing it certainly isn't is accurate. It's also hard to back up and duplicate. That's where documentation comes in on complex tasks such as software. Instead of struggling to maintain repetitive test procedures in our heads, we suggest relying on structured and automated test cases for recording knowledge. Done right, well-written test cases are an excellent communication tool that evolves together with the software during its whole life-cycle.
Large-scale software projects have challenges of their own. Bugs due to unexpected feature interactions are quite common. Ultimately, such bugs are a failure of communication. The complexity of such bugs is often significant, not least since the domain knowledge needed to track down the bug is often spread across different teams and individuals. Again, recording that domain knowledge in test cases makes the knowledge accessible.
Levels of test
It's beneficial to consider testing at all levels in a software project. Different levels allow us to capture different aspects and levels of detail. The overall goal is to catch errors as early as possible, preferably at the lowest possible level in the testing chain. But our division is more than a technical solution. It's a communication tool. The tests build on each other towards the user level. The higher up we get, the more we can involve non-technical roles in the conversation. Each level serves a distinct purpose:
Unit tests are driven by implicit design requirements. Unit tests are never directly mapped to formal requirements. This is the most technically challenging level. It's impossible to separate unit tests from design. Instead of fighting it, embrace it; unit tests are an excellent medium and opportunity for design (a small sketch follows this list). Unit tests are written solely by the developer responsible for a certain feature.
Integration tests are where requirements and design meet. The purpose of integration tests is to find errors in interfaces and in the interaction between different units as early as possible. They are driven by use cases, but also by design knowledge.
System tests are the easiest ones to formulate. At least from a technical perspective. System tests are driven by requirements and user stories. We have found that most well-written suites are rather fine-grained. One requirement is typically tested by at least one test case. That is an important point to make; when something breaks, we want to know immediately what it was.
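As a concrete illustration of unit tests recording design knowledge, here is a minimal JUnit 4 sketch. The ShoppingCart class and its API are hypothetical, invented purely for illustration; they do not come from the article.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ShoppingCartTest {

        // Records an implicit design requirement: an empty cart totals zero.
        @Test
        public void emptyCartHasZeroTotal() {
            ShoppingCart cart = new ShoppingCart();
            assertEquals(0, cart.totalInCents());
        }

        // Records how discounts interact with quantities - knowledge that
        // would otherwise live only in a developer's head.
        @Test
        public void bulkDiscountAppliesFromTenItems() {
            ShoppingCart cart = new ShoppingCart();
            cart.add(new Item("pencil", 100), 10);  // 10 pencils at 100 cents each
            assertEquals(900, cart.totalInCents()); // 10% bulk discount assumed
        }
    }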
The big win: tap into testers' creativity by automating
Testing at this top level refers to any kind of testing, among them exploratory and manual tests. These are the ones that can make a huge qualitative difference; it's under adverse conditions that the true quality of any product is exposed.
Thus the purpose of this level of testing is to try to break the software, exploiting its weaknesses, typically by trying to find unexpected scenarios. It requires a different mindset found in good testers; just like design, testing is a creative process. And if we manage to get a solid foundation by automating steps 1-3 above, we get the possibility to spend more time in this phase. As we see it, that's one of the big selling-points of automated tests.
The challenges of test automation
The relative success of a test automation project goes well beyond any technical solutions; test automation raises questions about the roles in a project. It's all too easy to make a mental distinction between the quality of the production code and the quality of the test code. It's a classic mistake. The test code will follow the product during its whole life cycle, and the same aspects of quality and maintainability should apply to it. That's why it's important to have developers responsible for developing these tools and frameworks, and perhaps even most test cases, in close collaboration with the testers.
There's one caveat here though; far too many organizations aren't shaped to deal with cross-disciplinary tasks. We often find that although the developers have the skills to write competent test frameworks and tools, they're often not officially responsible for testing. In social psychology there's a well-known phenomenon known as diffusion of responsibility [DOR]. Simply put, a single individual is less likely to take responsibility for an action (or inaction) when others are present. The problem increases with group size and has been demonstrated in a range of spectacular experiments and fateful real-world events.
The social game of large-scale software development is no exception. When an organization fails to adequately provide and assign responsibilities, we're often left with an unmaintainable mess of test scripts, simulators and utilities since the persons developing them aren't responsible for them; they're not even the users. These factors combined prevent the original developers from gaining valuable feedback. At the end, the product suffers along with the organization.
Changing a large organization is probably one of the hardest tasks in our modern corporate world. We're better advised to accept and mitigate the problem within the given constraints. One simple approach is to focus on mature test environments and/or frameworks, either a custom, self-maintained framework or one off the shelf. A QA manager should consider investing in a testing framework to discipline and speed up testing, especially if there are already existing test cases that must be re-run over and over again. Such a task is usually quite boring and therefore error-prone. Automating it minimizes the risk of errors due to human boredom. That's a double win.
System level testing
System level testing refers to requirements [REQ], user stories and use cases. Reading and analysing requirements, user stories and use cases is a vital part of the preparation of test cases. Use cases are very close to test cases. However, their focus is more on describing how the user interacts with the system than on specifying the input and expected results. That said, where there are good use cases and good test cases, they tend to be very close to each other.
Preparing test cases - the requirements link
With increasing automation the line between development and testing gets blurred; writing automated test cases is a development activity. But when developers maintain the frameworks, what's the role of the tester?
Well, let's climb the software hill and discuss requirements first. Requirements shall be treated and understood as generic versions of use cases. It is hard to write good requirements, but it is important to have them in order to keep an eye on all general aspects of the product. Now, at the highest level, test cases are derived directly and indirectly from the requirements. That makes the test cases placeholders for knowledge. It's the communicating role of the test cases. Well-written test cases can be used as the basis for communication around requirements and features. Increasingly, it becomes the role of the tester to communicate with Business Analysts [AKB] and Product Managers [AKP].
Once a certain requirement or user story has been clarified, that feedback goes into the test cases.
The formulation of test cases is done in close collaboration with the test specialists on the team. The tester is responsible for deciding what to test; the developer is responsible for how. In that context, there are two common problems with requirements; they get problematic when they're either too strict or too fuzzy. The following sections will explain the details and cures.
Avoiding too strict requirements
Some requirements are simply too strict, too detailed. Let's consider the following simple example. We want to write a calculator, so we write requirements stating that our product shall fulfil the following: 2 + 2 = 4, 3 * 5 = 15, 9 / 3 = 3. How many such requirements shall we write? At this level of detail there will be lots of them (let's say infinitely many...). Writing test cases will immediately show that the requirements are too detailed. There is no generic statement capturing the abstraction behind them, specifying what really shall be done. There are 3 examples of input and output. In a pathological case of testing, we would write exactly 3 test cases (copied and pasted from the requirements) and reduce the calculator to a look-up table containing exactly those 3 rows of values and operations. It may be a trivial example, but the principle extends to all computing.
Further, for scalability reasons it's important to limit the number of test cases with respect to their estimated static path count. One such technique is to introduce equivalence classes for the input data through Equivalence Class Partitioning [ECP]. This helps limit the number of tests for interface testing. ECP will also guide the organization of the test cases by dividing them into normal-operation test cases, corner cases and error situations.
Test data based on the ECP technique makes an excellent base for data-driven tests. Data-driven tests are another example of separating the mechanism (i.e. the common flow of operations) from the stimulus it operates on (i.e. the input data). Such a division scales well and expresses the general case more clearly.
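To make the idea concrete, here is a minimal data-driven JUnit 4 sketch based on the calculator example above, with one representative input per equivalence class. The Calculator class and its add method are hypothetical illustrations.

    import static org.junit.Assert.assertEquals;
    import java.util.Arrays;
    import java.util.Collection;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class CalculatorTest {
        private final int a, b, expected;

        public CalculatorTest(int a, int b, int expected) {
            this.a = a; this.b = b; this.expected = expected;
        }

        // One representative per equivalence class instead of endless examples.
        @Parameters
        public static Collection<Object[]> data() {
            return Arrays.asList(new Object[][] {
                { 2, 2, 4 },   // normal operation: positive + positive
                { -3, 5, 2 },  // mixed signs
                { 0, 7, 7 },   // corner case: additive identity
            });
        }

        @Test
        public void addsWithinEachEquivalenceClass() {
            assertEquals(expected, new Calculator().add(a, b));
        }
    }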
Cures for fuzzy requirements
Clearly, too strict requirements pose a problem. On the other side of the spectrum we have fuzzy requirements. Consider a requirement like: "during start everything shall be logged". On the development team we might very well understand the gist in some concrete way, but our customer may have a completely different interpretation. Simply asking the customer "How will it be tested?" may go a long way towards converting that requirement into something like: "During application start-up it shall be possible to log any information in the logger, where any information means: the start of the main function of the application and of all its plug-ins."
How did our simple question to the customer help us sort out the fuzziness in the requirement? First of all, "every" was transformed into "any". To get the conversation going, we could ask the user whether they are interested in every piece of information, like the spin of the electrons in the CPU. Writing test cases, or discussing them with the user, often gives us their perspective. Often, the user considers different information useful for different purposes. Consider our definition of "any" information above: here, "any" for release 1.0 could imply logging the start of the main function and plug-ins. We see that such a requirement does not limit the possible extensions for release 2.0.
The discussion also helped us clarify what "logged" really meant. From a testing point of view, we now see that the tests shall check for the presence of the logger. Later requirements may precisely define the term logger and what the logs look like. Again, requirements about the shape of the logs shall be verified by proper test cases and by keeping the customer in the loop. Preparing the test cases may guide the whole development team towards a very precise definition of the logs.
Consider another real-world example from a product that one of the authors was involved in. The requirements for that product specified that transactions must be used for all operations in the database. That's clearly not something the user cares about (unless we are developing an API for a database...). It's a design issue. The real requirements would be something related to persistent information in the context of multiple, concurrent users and leave the technical issues to the design without specifying a solution.
One symptom of this problem is requirements that are hard to test. The example above ended up being verified by code inspection - hard to automate, and hard to change the implementation. Say we found something more useful than a relational database. As long as we provide the persistency needed, it would be perfectly fine for the end-user. But, such a design change would trigger a change in the requirements too.
Finally, some words on Agile methodologies, since they're commonplace these days. Agile approaches may help in test preparation as well as in defining the strategy and tools and writing test cases. The reason Agile methodologies may facilitate these aspects is indirect, through the potentially improved communication within the project. But the technical aspects remain to be solved independent of the actual methodology. Thus, all aspects of the software product have to be considered from a test perspective anyway: shipping, the installation process, quality of documentation (which shall be specified in requirements as well) and so on.
Traceability
In safety-critical applications traceability is often a mandatory requirement in the regulatory process. We would like to stress that traceability is an important tool on any large-scale project. Done right, traceability is useful as a way to control the complexity, scale and progress of the development. By linking requirements to test cases we get an overview of the requirements coverage. Typically, each requirement is verified by one or more test cases. A requirement without test(s) is a warning flag; such a requirement is often useless, broken or simply too fuzzy.
From a practical perspective it's useful to have bi-directional links. Just as we should be able to trace a requirement to its test cases, each test case should state which requirement (or part of it) it tests. Bi-directional traceability is of vital importance when preparing or generating test reports.
Such a link could be as simple as a comment or magical tag in each test case, it could be an entry in the test log, or the links could be maintained by one of the myriad of available tools for requirements tracing.
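As one possible shape for such a link, here is a sketch in Java that uses a custom annotation as the "magical tag". The annotation, the requirement ID and the test are hypothetical illustrations, not the convention of any particular tool.

    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;

    // A home-grown tag linking a test case back to a requirement.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Verifies {
        String value(); // the requirement this test verifies, e.g. "REQ-042"
    }

    class StartupLoggingTest {

        @Verifies("REQ-042") // REQ-042: start-up events shall be logged
        void startupEventsAreLogged() {
            // ... test body; a report generator can scan @Verifies tags to
            // produce the requirement-coverage overview in both directions.
        }
    }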
Design of test environments
Once we understand enough of the product to start sketching out designs we need to consider the test environments. As discussed earlier, we recommend testing on different complementary levels. With respect to the test environment, there may well be a certain overlap and synergies that allow parts to be shared and re-used across the different test levels. But once we start moving up from the solution domain of design (unit tests) towards the problem domain (system and acceptance tests), the interfaces change radically. For example, we may go from testing a programmatic API of one module with unit tests to a fully-fledged GUI for the end-user. Clearly, these different levels have radically different needs with respect to input stimulation, deployment and verification.
Test automation on GUI level
In large-scale projects automatic GUI tests are a necessity. The important thing is that the GUI automation is restricted to checking the behaviour of the GUI itself. It's a common trap to try to test the underlying layers through the GUI (for example data access or business logic). Not only does it complicate the GUI tests and make the GUI design fragile to changes; it also makes it hard to inject errors into the software and simulate adverse conditions.
However, there are valid cases for breaking this principle. One common case is when attempting to add automated tests to a legacy code base. No matter how well-designed the software is, there will be glitches with respect to test automation (e.g. lack of state inspection capabilities, tightly coupled layers, hidden interfaces, no possibility to stimulate the system, no way to predictably inject errors). In this case, we've found it useful to record the existing behaviour as a suite of automated test cases. It may not capture every aspect of the software perfectly, but it's a valuable safety net during re-design of the software.
The test cases used to get legacy code under test are usually not as well-factored as tests that evolve with the system during its development. The implication is that they tend to be more fragile and more inclined to change. The important point is to consider these tests as temporary in their nature; as the program under test becomes more testable, these initial tests should be removed or evolve into formal regression tests where each test case captures one specific responsibility of the system under test.
Integration defines error handling strategies
In large-scale software development, one of the challenges is to ensure feature and interface compatibility between sub-systems and packages developed by different teams. It's of vital importance to get that feedback as early as possible, preferably on each committed code change. In this scope we need to design the tests to make sure that all possible connections are verified. The tests shall anticipate failures and check how one module behaves in case another module fails. The reason is twofold.
First, it's under adverse conditions that the real quality of any software is brutally exposed; we would be rich if given a penny for each Java stack trace we've seen in live systems on trains, in airports, and so on. Second, by focusing on inter-module failures we drive the development of an error handling strategy. And defining a common error handling policy is something that has to be done early on a multi-team software project. Error handling is a classic example of cross-cutting functionality that cannot be considered locally.
Simulating the environment
Quite often we need to develop simulators and mock-ups as part of the test environment. Building mock-ups will reveal any missing interfaces, especially when mock objects or modules have to be used instead of real ones. Further, simulators allow us to inject errors into the system that may be hard to provoke when using the real software modules.
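The sketch below illustrates error injection through a hand-written simulator, in the spirit of this paragraph. PaymentGateway, FailingGateway and Checkout are hypothetical names invented for the example.

    import java.io.IOException;

    interface PaymentGateway {
        void charge(String account, long cents) throws IOException;
    }

    // A hand-written simulator that always fails, used to drive the error
    // path that is hard to provoke with the real payment module.
    class FailingGateway implements PaymentGateway {
        @Override
        public void charge(String account, long cents) throws IOException {
            throw new IOException("simulated network failure");
        }
    }

    // Minimal system under test: failed charges are queued for retry.
    class Checkout {
        private final PaymentGateway gateway;
        private int pendingRetries = 0;

        Checkout(PaymentGateway gateway) { this.gateway = gateway; }

        void placeOrder(String account, long cents) {
            try {
                gateway.charge(account, cents);
            } catch (IOException e) {
                pendingRetries++; // do not lose the order on failure
            }
        }

        int pendingRetries() { return pendingRetries; }
    }

    class CheckoutTest {
        public static void main(String[] args) {
            Checkout checkout = new Checkout(new FailingGateway());
            checkout.placeOrder("acct-1", 500);
            // The order must survive the injected failure as a pending retry.
            assert checkout.pendingRetries() == 1;
        }
    }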
Finally, a warning about mock objects, based on hard-earned experience. With the increase in dynamic features in popular programming languages (reflection, etc.), many teams tend to use a lot of mocks at the lower levels of test (unit and integration tests). That may be all well and good; mocks can serve a purpose. The major problem we see is that mocks encourage interaction testing, which tends to couple the test cases to a specific implementation. This is possible to avoid, but any mock user should be aware of the potential problems.
Programming Languages for Testing
The different levels of tests introduced initially are pretty rough. Most projects will introduce more fine-grained levels. If we consider these more detailed layers of testing (e.g. acceptance testing, functional testing, production testing, unit testing) then, except for unit testing, the most important part is to separate the language used for testing from the development language. There are several reasons for this.
The development language is typically selected due to a range of constraints. These may be regulatory requirements in the safety or medical domains, historical reasons, efficiency, or simply the availability of a certain technology on the target platform. In contrast, a testing language shall be as simple as possible. Further, by using different languages we enable cross-verification of the intent, which may help in clarifying the details of the software under test. Developers responsible for supporting testing shall prepare high-level routines that can be used by testers without harm to the software under test. These can be either commercial tools [LMT] or open-source ones [OMT].
When capturing test cases we recommend using a formal language. In system- or mission-critical software development there are formal processes built around standards like DO-178B and similar. In regular software development, using an automated testing framework forces developers to write test specifications in a dedicated high-level language. Most testing tools offer such support. This is important since a formal language helps in the same way as normal source code: it can be verified and executed, and it is usually expressive in the test domain. If it is stored in plain text, then comparison tools can help check modifications and history. More advanced features are covered by test management tools.
TDD, unit tests and the missing link
A frequent discussion about unit tests concerns their relationship to the requirements, particularly in Test-Driven Development (TDD)[TDD], where the unit tests are used to drive the design of the software. With respect to TDD, the single most frequent question is: "how do I know the tests to write?" It's an interesting question. The concept of TDD seems to trigger something in people's minds; perhaps a suspicion that the design process isn't deterministic. It's particularly interesting since we rarely hear the question "how do I know what to program?", although it is exactly the same problem. When we answer something along the lines that design (as well as coding) always involves a certain amount of exploration, and that TDD is just another tool for this exploration, we get, probably with good reason, sceptical looks. The immediate follow-up question is: "but what about the requirements?" Yes, what about them? It's clear that they guide the development, but should the unit tests be traced to requirements?
Requirements describe the "what" of software in the problem domain. And as we during the design move deeper and deeper into the solution domain, something dramatic happens. Our requirements explode. Robert L. Glass identifies requirements explosion as a fundamental fact of software development: "there is an explosion of "derived requirements" [..] caused by the complexity of the solution process" [GLA]. How dramatic is this explosion? Glass continues: "The list of these design requirements is often 50 times longer than the list of original requirements" [GLA]. It is requirements explosion that makes it unsuitable to map unit tests to requirements; in fact, many of the unit tests arise due to the "derived requirements" that do not even exist in the problem space!
Avoid test dependencies on implementation details
Most mainstream languages have some concept of private data. These could be methods and members in message-passing OO languages. Even the languages that lack direct language support for private data (e.g. Python, JavaScript) tend to have established idioms and conventions to communicate the intent. In the presence of short-term goals and deadlines, it may very well be tempting to write tests against such private implementation details. Obviously, there's a deeper issue with it; most testers and developers understand that it's the wrong approach.
Before discussing the fallacies associated with exposed implementation details, let's consider the purpose of data hiding and abstraction. Why do we encapsulate our data and who are we protecting it from? Well, it turns out that most of the time we're protecting our implementations from ourselves. When we leak details in a design we make it harder to change. At some point we've probably all seen code bases where what we expected to be a localized change turned out to involve lots of minor changes rippling through the code base. Encapsulation is an investment into the future. It allows future maintainers to change the how of the software without affecting the what.
With that in mind, we see that the actual mechanisms aren't that important; whether a convention or a language concept, the important thing is to realize and express the appropriate level of abstraction in our everyday minor design decisions.
Tests are no different. Even here, breaking the seal of encapsulation will have a negative impact on the maintainability and future life of the software. Not only will the tests be fragile, since a change in implementation details may break them; the tests themselves will also be hard to evolve, since they now concern themselves with the actual implementation, which should be abstracted away.
That said, there may well exist cases where a piece of software simply isn't testable without relying on and inspecting private data. Such a case is actually valuable feedback, since it often highlights a design flaw; if something is hard to test, we may have a design problem. And that design problem may manifest itself in other usage contexts later. As the typical first user of a module, the test cases are the messenger, and we had better listen to them. Each case requires a separate analysis, but we've often found one of the following flaws as the root cause:
Important state is not exposed - perhaps we shall think about some state of the module or class that shall be exposed in a kind of invariant way (e.g. by COW, const).
Class/Module is complicated with overly strong coupling.
The interface is too poor to write essential test cases.
A proper bridge (or in C++ pimpl) pattern is not used to really hide private details that shall not be visible. In this case it's simply a failure of the API to communicate by separating the public from the hidden parts.
Coping with feedback
As testers start to write test cases expected to be run in an automated way, they will usually detect anomalies, asymmetric patterns and deviations in the code. Provided coding and testing are executed reasonably in parallel, this is valuable feedback to the developer. On a well-functioning team, the following information would typically flow back to the designers of the code:
Are there any missing interfaces? Or are there perhaps too many interfaces bloating the design?
Is it intended like that?
Is the SW conceptually consistent?
Are the differences between similar methods documented and clearly expressed?
Since the test cases are typically the first users of the software, they are likely to run into other issues that have to be addressed sooner rather than becoming a maintenance cost. One prime example is the instantiation of individual software components and systems. The production and test code may have different needs here. In some cases, the test code has to develop mechanisms for its own unique needs, for example factory objects to instantiate the components under test. In that case, the tester will immediately detect flaws and complicated dependency chains.
Automated test cases have another positive influence on the software design. When we want to automate efficiently, we have to separate different responsibilities. This split is typically done based on layers, where each layer takes us one step further towards the user domain. Examples of such layers include DB access, communication, business logic and GUI. Another typical example involves presenting different usage views, for example providing both a GUI and a CLI.
Filling data into classes or data containers
This topic brings many important design decisions under consideration. Factories, and construction of the software in general, are always tricky in terms of striking a balance between flexibility and safety. Let's consider a simple example class, Authentication, and assume the class contains two fields: login and password. If we start to write test cases to check access using that class, we could arrive at a table with the following test data: Authentication = {{A,B},{C,D},{E,F},{G,H},{I,J}}. If the class has two getters (login, password) and two setters (similar ones), it is very likely that we do not need to separate login and password: changing the login usually forces us to change the password too. What about having two getters, one setter that takes two arguments, and one constructor with two arguments? That seems a good simplification. It means that by preparing the tests, we arrived at suggested improvements in the design of the class.
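A sketch of the resulting class, following the article's Authentication example; since login and password always change together, one two-argument constructor and one two-argument setter replace the independent setters:

    public final class Authentication {
        private String login;
        private String password;

        public Authentication(String login, String password) {
            set(login, password);
        }

        // One setter taking both arguments keeps the pair consistent.
        public void set(String login, String password) {
            this.login = login;
            this.password = password;
        }

        public String login() { return login; }
        public String password() { return password; }
    }

The test data table {{A,B},{C,D},...} above then feeds naturally into a data-driven test against this single setter.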
Gain feedback from code metrics
When testing against formal requirements the initial scope is rather fixed. By tracing the requirements to test cases we know the scope and extent of testing necessary. A more subjective weighting is needed on lower levels of test. Since unit tests (as discussed earlier) are written against implicit design requirements there's no clear test scope. How many tests shall we write?
Like so many other quality related tools, there's a point of diminishing return with unit tests. Even if we cover every corner of the code base with tests there's absolutely no guarantee that we get it right. There are just too many factors, too many possible ways different modules can interact with each other and too many ways the tests themselves may be broken. Instead, we recommend basing the decision on code metrics.
Calculating code metrics, in particular cyclomatic complexity and estimated static path count [KAC], may help us answer the question for a particular case. Cyclomatic complexity shows the minimal number of test actions or test cases that should be considered. Estimated static path count, on the other hand, shows a kind of maximum (the true maximum is quite often infinite). This means that tools that calculate code metrics point to areas that need improvement as well as to how the code should be tested. Basically, code metrics highlight parts of the code base that might be particularly tricky and may require extra attention. Note that these aspects are a good subject for automation: automatic tests can be checked against coverage metrics, and the code can be automatically checked with respect to cyclomatic complexity. Just don't forget to run the metrics on the test code itself; after all, it's going to evolve and live with the system too.
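A small, hypothetical illustration of these two metrics on a single method:

    class Pricing {
        // This method has cyclomatic complexity 3 (two decision points plus
        // one), so at least three test cases should be considered. Its
        // estimated static path count is 2 * 2 = 4, an upper bound on the
        // number of distinct execution paths.
        static String rateFor(int age, boolean member) {
            String rate = (age < 18) ? "child" : "adult"; // decision 1
            if (member) {                                 // decision 2
                rate = rate + "-discount";
            }
            return rate;
        }
    }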
Summary
Test automation is a challenge. Automating software testing requires a project to focus on all areas of the development, from the high-level requirements down to the design of individual modules. Yet technical solutions aren't enough; successful test automation requires working communication and structured collaboration between a range of different roles on the project. This article has touched on all those areas. While there's much more to write on the subject, we hope our brief coverage may serve as a starting point and guide for your test automation tasks.
Oracle releases HTML5-focused Java EE 7
Oracle on Wednesday formally introduced its completed implementation of Java Platform, Enterprise Edition (Java EE) 7, focused on HTML5 applications, developer productivity, and enterprise demands. Developers can download the SDK for Java EE 7 on Oracle's website.
The enterprise-grade version of Java is primarily deployed on servers; EE 7 features include JSON (JavaScript Object Notation) support for data transfer, and WebSocket communications, both providing for HTML5 application development. "This is the ultimate platform for building HTML5 and mobile apps," said Cameron Purdy, vice president of development at Oracle, during Oracle's introduction of Java EE 7 via a Webcast.
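As an illustration of the WebSocket support, a minimal Java EE 7 server endpoint looks roughly like this; the /echo path and the echo behaviour are our own illustrative choices, not from the article.

    import javax.websocket.OnMessage;
    import javax.websocket.server.ServerEndpoint;

    @ServerEndpoint("/echo")
    public class EchoEndpoint {

        // Called whenever a client sends a text frame; the return value
        // is sent back to that client.
        @OnMessage
        public String onMessage(String message) {
            return "echo: " + message;
        }
    }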
"Java EE 7 brings this widely used enterprise framework to the modern age of HTML5 and also brings significant improvement in developer productivity that will have windfalls in code quality," said analyst Al Hilwa, of IDC. "In this age of the polyglot programmer, Java EE 7 will allow Java to remain one of the most widely deployed technologies for server applications on the planet."
JavaServer Faces 2.2 capabilities in Java EE 7 add "HTML5-friendly markup support," said Linda DeMichiel, Java EE specification lead. Batch programming capabilities, intended for long-running tasks, support enterprise-scale applications, while concurrency utilities provide higher throughput.
For developer productivity, EE 7 offers easier-to-use APIs, such as Java Message Service 2.0, DeMichiel said, and the simplified JMS API reduces the need for a lot of boilerplate code. Tooling support for Java EE 7 can be found in the NetBeans IDE and Eclipse.
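To illustrate the boilerplate reduction, here is a minimal sketch using the simplified JMS 2.0 API; the queue's JNDI name and the OrderNotifier class are assumptions made for the example.

    import javax.annotation.Resource;
    import javax.inject.Inject;
    import javax.jms.JMSContext;
    import javax.jms.Queue;

    public class OrderNotifier {

        @Inject
        private JMSContext context; // container-managed, no setup code needed

        @Resource(lookup = "java:app/jms/orders") // assumed queue JNDI name
        private Queue orders;

        public void notifyOrderPlaced(String orderId) {
            // One line to send a message, versus the Connection/Session/
            // MessageProducer boilerplate required by JMS 1.1.
            context.createProducer().send(orders, orderId);
        }
    }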
For the most part, Oracle is deferring cloud capabilities in Java EE until the subsequent Java EE 8 release, although such areas as resource definition metadata are being addressed in EE 7 in relation to cloud computing.
Information Systems
Databases: Their Creation, Management and Utilization
Information systems are the software and hardware systems that support data-intensive applications. The journal Information Systems publishes articles concerning the design and implementation of languages, data models, process models, algorithms, software and hardware for information systems.
Subject areas include data management issues as presented in the principal international database conferences (e.g. ACM SIGMOD, ACM PODS, VLDB, ICDE and ICDT/EDBT) as well as data-related issues from the fields of data mining, information retrieval, internet and cloud data management, web semantics, visual and audio information systems, scientific computing, and organisational behaviour. Implementation papers having to do with massively parallel data management, fault tolerance in practice, and special purpose hardware for data-intensive systems are also welcome.
All papers should motivate the problems they address with compelling examples from real or potential applications. Systems papers must be serious about experimentation either on real systems or simulations based on traces from real systems. Papers from industrial organisations are welcome.
Theoretical papers should have a clear motivation from applications. They should either break significant new ground or unify and extend existing algorithms. Such papers should clearly state which ideas have potentially wide applicability.
In addition to publishing submitted articles, the Editors-in-Chief will invite retrospective articles that describe significant projects by the principal architects of those projects. Authors of such articles should write in the first person, tracing the social as well as technical history of their projects, describing the evolution of ideas, mistakes made, and reality tests.
Technical results should be explained in a uniform notation with the emphasis on clarity and on ideas that may have applications outside of the environment of that research. Particularly complex details may be summarised with references to previously published papers.
We will make every effort to allow authors the right to republish papers appearing in Information Systems in their own books and monographs.
Software Engineering
Software engineering (SE) is concerned with developing and maintaining software systems that behave reliably and efficiently, are affordable to develop and maintain, and satisfy all the requirements that customers have defined for them. It is important because of the impact of large, expensive software systems and the role of software in safety-critical applications. It integrates significant mathematics, computer science and practices whose origins are in engineering.
Students can find software engineering in two contexts: computer science programs offering one or more software engineering courses as elements of the CS curriculum, and in separate software engineering programs. Degree programs in computer science and in software engineering tend to have many courses in common; however, as of Spring 2006 there are few SE programs at the bachelor’s level. Software engineering focuses on software development and goes beyond programming to include such things as eliciting customers’ requirements, and designing and testing software. SE students learn how to assess customer needs and develop usable software that meets those needs.
Both computer science and software engineering curricula typically require a foundation in programming fundamentals and basic computer science theory. They diverge in their focus beyond these core elements. Computer science programs tend to keep the core small and then expect students to choose among more advanced courses (such as systems, networking, database, artificial intelligence, theory, etc.). In contrast, SE programs generally expect students to focus on a range of topics that are essential to the SE agenda (problem modeling and analysis, software design, software verification and validation, software quality, software process, software management, etc.). While both CS and SE programs typically require students to experience team project activity, SE programs tend to involve the students in significantly more of it, as effective team processes are essential to effective SE practices. In addition, a key requirement specified by the SE curriculum guidelines is that SE students should learn how to build software that is genuinely useful and usable by the customer and satisfies all the requirements defined for it.
Most people who now function in the U.S. as serious software engineers have degrees in computer science, not in software engineering. In large part this is because computer degrees have been widely available for more than 30 years and software engineering degrees have not. Positions that require development of large software systems often list “Software Engineer” as the position title. Graduates of computer science, computer engineering, and software engineering programs are good candidates for those positions, with the amount of software engineering study in the programs determining the suitability of that graduate for such a position.
Most IT professionals who have computing degrees come from CS or IS programs. It is far too soon for someone who wants to work as a software engineer or as an information technology practitioner to be afraid that they won’t have a chance if they don’t graduate from a degree program in one of the new disciplines. In general, a CS degree from a respected program is the most flexible of degrees and can open doors into the professional worlds of CS, SE, IT, and sometimes CE. A degree from a respected IS program allows entry to both IS and IT careers.
Media attention to outsourcing, offshoring, and job migration has caused many to be concerned about the future of computing-related careers. It is beyond the scope of this web site to address these issues. The report of the British Computer Society addresses these issues as they impact the U.K. The Globalization Report of the ACM Job Migration Task Force reflects an international perspective, not just a U.S-centric one.
Huge Java Update Won't Get Oracle Out of Attackers' Crosshairs, Microsoft Offering Bounties for Vulnerabilities, and More
listed here are the highest cyber news and stories of one's day.
Trojan uses fake Adobe certificate – A new piece of malware has been discovered pretending to carry a certificate from Adobe Systems in order to trick users. The software injects itself into IE and Notepad and lets its handler take control of the infected machine. This use of fake certificates could be a sign of things to come, because it can lull users into a false sense of security. Via ISS Source, more here.
Huge Java update won’t keep Oracle out of attackers’ crosshairs – Oracle recently released 40 updates to its Java software, hoping to shore up the much-maligned product. But, according to some analysts, the software will continue to be targeted because of its cross-platform ubiquity, which makes vulnerabilities in Java particularly valuable to malware creators and controllers. Oracle has also been slow to patch these vulnerabilities, which only encourages attackers further. Via Computerworld, more here.
Many corporations are negligent about SAP security, researchers say – SAP technologies are typically responsible for essential business processes. While SAP has been diligently pumping out improved security patches, many corporations have not been applying them. Patch management is one of the simpler security tasks there is, but without it an entire organization can be put at risk. Via Computerworld, more here.
Hagel discusses ‘state of DoD’ in Nebraska speech – When the Secretary of Defense recently spoke at the University of Nebraska, he spoke at length about the changes occurring in DoD. He mentioned “the role of technology in closely linking the world’s people and their aspirations and economies,” and said that in the face of rapidly developing and interconnected new threats like cyber, which fundamentally alter the face of future conflicts, the military must reset from a defense enterprise structure that still reflects its Cold War design. Via Fort Campbell Courier, more here.
Microsoft offering hackers $1mln for finding bugs in Windows – Java may be the only software more ubiquitous than Windows, but Windows still runs on many millions of machines across the globe. Each vulnerability is valued at up to a million dollars, and the accompanying remediations are valuable as well. By incentivizing hackers, Microsoft may begin to chip away at the Sisyphean task of securing the many millions of lines of code that make up Windows. Via Yahoo! Finance, more here.
65+ web sites compromised to deliver malvertising – “At least sixty-five different sites serving ads that ultimately led to malware have been noticed by Zscaler researchers.” This is becoming a popular vector of attack: by compromising a single server, attackers can reach thousands or even millions of clicks, any of which can then be click-jacked. A variety of sites were affected, including Government Security News. Via Help Net Security, more here.
About Java
By David Reilly
Java - an island of Indonesia, a type of coffee, and a programming language. Three very different meanings, each in varying degrees of importance. Most programmers, though, are interested in the Java programming language. In just a few short years (since late 1995), Java has taken the software community by storm. Its phenomenal success has made Java the fastest growing programming language ever. There's plenty of hype about Java, and what it can do. Many programmers, and end-users, are confused about exactly what it is, and what Java offers.
Java is a revolutionary language
The properties that make Java so attractive are present in other programming languages. Many languages are ideally suited for certain types of applications, even more so than Java. But Java brings all these properties together, in one language. This is a revolutionary jump forward for the software industry.
Object-oriented
Many older languages, like C and Pascal, were procedural languages. Procedures (also called functions) were blocks of code that were part of a module or application. Procedures passed parameters (primitive data types like integers, characters, strings, and floating point numbers). Code was treated separately from data. You had to pass around data structures, and procedures could easily modify their contents. This was a source of problems, as one part of a program could have unforeseen effects on another. Tracking down which procedure was at fault wasted a great deal of time and effort, particularly with large programs.
In some procedural languages, you could even obtain the memory location of a data structure. Armed with this location, you could read and write to the data at a later time, or accidentally overwrite its contents.
Java is an object-oriented language. An object-oriented language deals with objects. Objects contain both data (member variables) and code (methods). Each object belongs to a particular class, which is a blueprint describing the member variables and methods an object offers. In Java, almost every variable is an object of some type or another - even strings. Object-oriented programming requires a different way of thinking, but is a better way to design software than procedural programming.
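To make this concrete, here is a minimal sketch of a Java class; the BankAccount name and its members are invented purely for illustration.

    // A minimal sketch: in Java, data (member variables) and code
    // (methods) live together in one class. All names here are invented.
    public class BankAccount {
        // Member variables - the object's data, hidden from outside code.
        private String owner;
        private double balance;

        public BankAccount(String owner, double openingBalance) {
            this.owner = owner;
            this.balance = openingBalance;
        }

        // Methods - the only way other code can touch the balance, so
        // no distant part of the program can corrupt it by accident.
        public void deposit(double amount) {
            balance += amount;
        }

        public double getBalance() {
            return balance;
        }
    }

Each BankAccount object created from this blueprint carries its own owner and balance, which is what it means for a class to describe the member variables and methods its objects offer.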
There are many popular object-oriented languages available today. Some like Smalltalk and Java are designed from the beginning to be object-oriented. Others, like C++, are partially object-oriented, and partially procedural. In C++, you can still overwrite the contents of data structures and objects, causing the application to crash. Thankfully, Java prohibits direct access to memory contents, leading to a more robust system.
Portable
Most programming languages are designed for a specific operating system and processor architecture. When source code (the instructions that make up a program) is compiled, it is converted to machine code which can be executed only on one type of machine. This process produces native code, which is extremely fast.
Another type of language is one that is interpreted. Interpreted code is read by a software application (the interpreter), which performs the specified actions. Interpreted code often doesn't need to be compiled - it is translated as it is run. For this reason, interpreted code is quite slow, but often portable across different operating systems and processor architectures.
Java takes the best of both techniques. Java code is compiled into a platform-neutral machine code, which is called Java bytecode. A special type of interpreter, known as a Java Virtual Machine (JVM), reads the bytecode, and processes it. Figure One shows a disassembly of a small Java application. The bytecode, indicated by the arrow, is represented in text form here, but when compiled it is represented as bytes to conserve space.
Figure One - Bytecode disassembly for "HelloWorld"
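A disassembly like the one in the figure can be produced from a program as small as the following sketch; compiling it with javac and then running javap -c HelloWorld (javap is the disassembler bundled with the JDK) prints the bytecode in text form.

    // HelloWorld.java - the classic minimal Java program. Compile with
    // javac, then run "javap -c HelloWorld" to see its bytecode listing.
    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello, world!");
        }
    }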
The approach Java takes offers some big advantages over other interpreted languages. Firstly, the source code is protected from view and modification - only the bytecode needs to be made available to users. Secondly, security mechanisms can scan bytecode for signs of modification or harmful code, complementing the other security mechanisms of Java. Most of all though, it means that Java code can be compiled once, and run on any machine and operating system combination that supports a Java Virtual Machine (JVM). Java can run on Unix, Windows, Macintosh, and even the Palm Pilot. Java can even run inside a web browser, or a web server. Being portable means that the application only has to be written once - and can then execute on a wider range of machines. This saves a lot of time, and money.
Multi-threaded
If you've ever written complex applications in C, or PERL, you'll probably have come across the concept of multiple processes before. An application can split itself into separate copies, which run concurrently. Each copy replicates code and data, resulting in increased memory consumption. Getting the copies to talk together can be complex, and frustrating. Creating each process involves a call to the operating system, which consumes extra CPU time as well.
A better model is to use multiple threads of execution, referred to as threads for short. Threads can share data and code, making it easier to share data between thread instances. They also use less memory and CPU overhead. Some languages, like C++, have support for threads, but they are complex to use. Java has support for multiple threads of execution built right into the language. Threads require a different way of thinking, but can be understood very quickly. Thread support in Java is very simple to use, and the use of threads in applications and applets is quite commonplace.
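As a minimal sketch of how lightweight this can be (the class name and message are invented for illustration), here are two threads sharing one task object:

    // ThreadDemo.java - a sketch of Java's built-in thread support.
    // Runnable and Thread live in java.lang, so no imports are needed.
    public class ThreadDemo {
        public static void main(String[] args) {
            // One task object, shared by both threads - they share
            // code and data rather than copying a whole process.
            Runnable task = new Runnable() {
                public void run() {
                    System.out.println("Hello from " +
                            Thread.currentThread().getName());
                }
            };
            new Thread(task, "worker-1").start();
            new Thread(task, "worker-2").start();
        }
    }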
Automatic garbage collection
No, we're not talking about taking out the trash (though a computer that could literally do that would be kind of neat). The term garbage collection refers to the reclamation of unused memory space. When applications create objects, the JVM allocates memory space for their storage. When the object is no longer needed (no reference to the object exists), the memory space can be reclaimed for later use.
Languages like C++ force programmers to allocate and deallocate memory for data and objects manually. This adds extra complexity, but also causes another problem - memory leaks. When programmers forget to deallocate memory, the amount of free memory available is decreased. Programs that frequently create and destroy objects may eventually find that there is no memory left. In Java, the programmer is free from such worries, as the JVM will perform automatic garbage collection of objects.
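A small sketch of what this looks like in practice (names invented); notice that the programmer never writes a free or delete:

    // GcDemo.java - objects become eligible for garbage collection as
    // soon as no reference to them remains. The JVM decides when (and
    // whether) to actually reclaim the memory.
    public class GcDemo {
        public static void main(String[] args) {
            for (int i = 0; i < 1000; i++) {
                StringBuffer scratch = new StringBuffer("temporary " + i);
                // 'scratch' disappears at the end of each loop pass, so
                // each StringBuffer becomes unreachable - and collectable.
            }
            System.gc(); // only a hint; the JVM is free to ignore it
        }
    }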
Secure
Security is a big issue with Java. Since Java applets are downloaded remotely, and executed in a browser, security is of great concern. We wouldn't want applets reading our personal documents, deleting files, or causing mischief. At the API level, there are strong security restrictions on file and network access for applets, as well as support for digital signatures to verify the integrity of downloaded code. At the bytecode level, checks are made for obvious hacks, such as stack manipulation or invalid bytecode. The strong security mechanisms in Java help to protect against inadvertent or intentional security violations, but it is important to remember that no system is perfect. The weakest link in the chain is the Java Virtual Machine on which it is run - a JVM with known security weaknesses can be prone to attack. It is also worth noting that while there have been a few identified weaknesses in JVMs, they are rare, and usually fixed quickly.
Network and "Internet" aware
Java was designed to be "Internet" aware, and to support network programming. The Java API provides extensive network support, from sockets and IP addresses, to URLs and HTTP. It's extremely easy to write network applications in Java, and the code is completely portable between platforms. In languages like C/C++, the networking code must be re-written for different operating systems, and is usually more complex. The networking support of Java saves a lot of time, and effort.
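As a hedged sketch of how little code a network fetch takes (the URL is just a placeholder, and real code should handle IOException more gracefully):

    // FetchPage.java - reads a web page over HTTP using only the
    // standard java.net and java.io packages.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class FetchPage {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://www.example.com/");
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
        }
    }

The same class file runs unchanged on any platform with a JVM, which is the portability payoff described earlier.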
Java also includes support for more exotic network programming, such as remote-method invocation (RMI), CORBA and Jini. These distributed systems technologies make Java an attractive choice for large distributed systems.
Simplicity and ease-of-use
Java draws its roots from the C++ language. C++ is widely used, and very popular. Yet it is regarded as a complex language, with features like multiple-inheritance, templates and pointers that can be counter-productive. Java, on the other hand, is closer to a "pure" object-oriented language. Access to memory pointers is removed, and object-references are used instead. Support for multiple-inheritance has been removed, which lends itself to clearer and simpler class designs. The I/O and network library is very easy to use, and the Java API provides developers with lots of time-saving code (such as networking and data-structures). After using Java for a while, most developers are reluctant to return to other languages, because of the simplicity and elegance of Java.
Summary
Java provides developers with many advantages. While most of these are present in other languages, Java combines all of these together into one language. The rapid growth of Java has been nothing short of phenomenal, and shows no signs (yet!) of slowing down. In next month's column, I'll talk more about the heart of Java - the Java Virtual Machine.
How C Programming Works
The C programming language is remarkably popular, and it's easy to see why. Programming in C is efficient and gives the programmer a great deal of control. Many other programming languages, like C++, Java and Python, were developed using C.
Chances are, if you're a programmer, you won't use C exclusively in your own work. Even so, there are many reasons why learning C is highly beneficial, even if you don't use it often. Here's why:
You'll be able to read and write code for software that can run on many different kinds of computer platforms, including everything from small microcontrollers to desktop, laptop and mobile operating systems.
You'll better understand what high-level languages are doing behind the scenes, for example memory management and garbage collection. This understanding can help you write programs that work more efficiently.
If you're an information technology (IT) specialist, you can also benefit from learning C. IT professionals often write, maintain and run scripts as part of the job. A script is a list of instructions for a computer's operating system to follow. To run certain scripts, the computer sets up a controlled execution environment called a shell. Since many operating systems run shells based on C, the C shell is a popular scripting adaptation of C used by IT pros.
This article covers the history behind C, looks at why C is so important, shows examples of basic C code, and explores some important features of C, including data types, operations, functions, pointers and memory management. Though this article isn't an instruction manual for programming in C, it covers what makes C programming unique in a way that goes beyond those first few chapters of the average C programming guide.
Let's begin by looking at where the C programming language came from, how it has developed, and the role it plays in software development today.
How HTML5 Works
Hypertext Markup Language (HTML) has been a core technology for the Web since the early 1990s. Tim Berners-Lee created HTML in 1989 as a simple but effective way to encode electronic documents. In fact, the original purpose of the first web browser was to serve as a reader for such documents. Twenty years later, the browser itself has become a portal to a whole realm of online media. That's why HTML5 isn't just another HTML revision, but a comprehensive standard for how web pages work.
To better understand what makes HTML5 unique, let's turn the clock back a little. In 1994, HTML was still in its first revision, Mosaic and Netscape dominated the browser market, and the vast majority of people had yet to experience this new thing called the World Wide Web. That year, HTML creator Berners-Lee headed a newly established web standards group called the World Wide Web Consortium (W3C).
Though the W3C is a respected standards authority today, the commercial players in the 1990s browser market largely ignored its standards and blazed their own paths. By 1995, the W3C had published the second revision of the HTML standard, and web newcomer Microsoft was gaining ground with its Internet Explorer (IE) browser. Microsoft largely ignored standards, and Netscape struggled to maintain a respectable market share while IE began to dominate [source: Harris].
Throughout these early browser wars, web developers were challenged to build sites compatible with every new release of the major browsers as well as with the lesser-used Opera and Apple Safari browsers. Even though the W3C had published HTML 3.2 in 1997, followed by HTML 4 in 1998, following the standards seemed less important than keeping up with browser-specific features. This went on until 2003, when the community-driven Mozilla Foundation broke the trend. After its original Mozilla browser release, followed by its Firefox browser in 2004, Mozilla quickly hacked away at IE's dominance. In addition, these new browsers actually followed existing W3C standards while doing so.
While Mozilla's Firefox continued growing on the aging HTML 4 standard, Mozilla joined Apple and Opera in 2004 to form a group called the Web Hypertext Application Technology Working Group (WHATWG). The purpose of WHATWG is to keep HTML development alive. Though it originally hesitated, the W3C joined the HTML revival in 2006. Together, WHATWG and the W3C combined existing specifications for HTML and XHTML and developed them further to produce the new HTML5 specification. That specification is now maintained and published by the W3C [sources: W3C, WHATWG].
This article explores this new HTML5 technology. We'll look at XHTML and other technologies that have gone into HTML5, and cover the basics of how to use HTML5 to create engaging, standards-compliant web content. We'll also look at a few exciting ways people are using HTML5 on the Web. Let's begin by looking at the goals of HTML5 and why it's more than just HTML.
Object Oriented Programming
Thinking Object-Oriented
Real Power
One of the more difficult leaps from procedural to object-oriented programming is conceptualizing the system you are programming. Thinking object-oriented is the real power of object-oriented programming, and even if we always program procedurally we may already be thinking object-oriented. We need to be able to see a system, a problem, or a program in terms of objects and their relationships to one another. When we do this, any program will be much simpler and easier for us to grasp. It will be easy to picture how it works in our minds.
There are three ways to think object-oriented. One way is not better than another. They are all just different, and each of them comes into play as we start to use objects appropriately. I call these three ways the Top Down approach, the Bottom Up approach, and Real World Modeling.
Top Down
The Top Down approach in object-oriented programming gives us a way to see the system as a whole. We break the entire system down into related parts. The Model-View-Controller design pattern (we’ll talk about design patterns in a later article) is an example of this. It breaks the system into three parts: objects related to the business model, display objects, and objects to handle user input.
When we use the Top Down approach we need to think of how the system can be broken up into logical pieces. We can slice up the entire system like I do a pie. First I slice the pie in half, then in half again, making four pieces, then in half again. I end up with nice evenly sized pieces, something I’ve never been able to do by slicing a piece at a time. Similarly with a program or system (which might be a small part of a larger system) we break it up into larger pieces, then smaller ones, then even smaller ones, until we have pieces of a manageable size.
We can see the Top Down approach thinking with a short example. Let’s use the oh-so-common ecommerce store. If we think of the system as a whole we know there will be data to keep, store functionality, and a good looking design that we want to change once in a while. We can break our data section down into session management and a database object. We might break our store functionality into sections like the catalog, product reviews, the shopping cart, and the checkout. We can have our display section broken down into a general template and a caching template that will cache pages for reuse (so the database doesn’t have to work so hard). We can then go over each of the functionality sections and further break them down into small pieces and objects.
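As a hedged sketch, that breakdown could be written down as Java types, one per package; every name below is invented for illustration.

    // A sketch of the top-down slices as skeletal Java interfaces.
    // In a real system each would live in its own package (folder).
    interface Session        { /* data: per-visitor state            */ }
    interface Database       { /* data: queries and connections      */ }
    interface Catalog        { /* store: product listings            */ }
    interface ReviewManager  { /* store: product reviews             */ }
    interface ShoppingCart   { /* store: items being purchased       */ }
    interface Checkout       { /* store: payment and order placement */ }
    interface Template       { /* display: general page rendering    */ }
    interface CachingTemplate extends Template { /* cached pages */ }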
Bottom Up
The Bottom Up approach does not look at the system as a whole. Instead, it concentrates on grouping related functionality together. We see functions and variables as related and so put them together into an object. Then we see objects which interact or are related and put them together into a group (called a package, usually put into the same folder etc.) We can then group those object groups into larger and larger groups until we have the whole system accounted for.
Using this approach to think object-oriented we use two different steps. The first step is grouping functionality into objects. The second step is grouping objects into groups. For step one, we figure out what goes together. This can be easy if we already group our related procedural code into files. We may have a file that contains all of our database functions in it. Instead of having all the functions there sitting in the global scope and the variables they use (perhaps a connection variable, or a record-set variable), we can group it all into a database object, with properties such as the connection reference and functions such as query or closeConnection.
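A minimal sketch of that grouping in Java, using JDBC as a stand-in for "our database functions" (the class name and connection details are invented for this example):

    // Database.java - formerly global variables become private
    // properties, and free-floating functions become methods.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class Database {
        private Connection connection; // once a global, now a property

        public Database(String url, String user, String password)
                throws SQLException {
            this.connection = DriverManager.getConnection(url, user, password);
        }

        public ResultSet query(String sql) throws SQLException {
            // Callers should eventually close the returned ResultSet.
            Statement stmt = connection.createStatement();
            return stmt.executeQuery(sql);
        }

        public void closeConnection() throws SQLException {
            connection.close();
        }
    }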
The second step is similar to the Top Down approach, just backwards. Instead of breaking a system down into parts, we are building it up from the pieces we know we will have. Grouping objects into larger and larger groups until we have the whole system makes it much easier to picture the internals of a system in our mind, and how things are actually working. It also allows us and others to easily find pieces of functionality to extend or fix because they are organized into folders and files (packages and objects) in a logical manner.
Model the Real World
Finally, Real World Modeling is one of the most powerful aspects of object-oriented programming. This means making objects that you know from the real world into objects in your system. Because we all live in the real world, it is easy for others to immediately understand what an object is and does if it is based on something from the real world.
A customer, shopping cart, and product in an ecommerce store are immediately recognized and their interactions between each other are understood easily, because we can see the same interaction in our daily lives. We know that there can be many products in our shopping cart. We know that each customer may have a shopping cart to put their products in. We know that customers have names and addresses, and that products have names and prices. The programmatic shopping cart however may have related functionality that a real shopping cart may not have. It may be able to tell us the total price of all the products in it, where a real cart can’t do anything but roll around (something not needed in an ecommerce store).
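A hedged sketch of those three real-world objects in Java (all names, fields and behavior invented for illustration):

    // Real-world modeling: a Customer owns a ShoppingCart, and the
    // cart holds Products - mirroring what happens in an actual store.
    import java.util.ArrayList;
    import java.util.List;

    class Product {
        String name;
        double price;
        Product(String name, double price) {
            this.name = name;
            this.price = price;
        }
    }

    class ShoppingCart {
        private List<Product> products = new ArrayList<Product>();

        void add(Product p) { products.add(p); }

        // Something a real cart cannot do: total up its own contents.
        double totalPrice() {
            double total = 0;
            for (Product p : products) total += p.price;
            return total;
        }
    }

    class Customer {
        String name;
        ShoppingCart cart = new ShoppingCart(); // each customer has one
        Customer(String name) { this.name = name; }
    }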
The first thing you might do when developing a system is to discover all the real world objects in that system. Whether it is employees, supplies, offices, text characters, documents, buttons, or trash cans, they all help us think about the system more easily and model it better. We already know many of the objects’ interactions and who “owns” what (the customer owns the shopping cart and the cart owns the products). These real world objects may even be the groupings of objects instead of the objects themselves. Or they may be objects that help group others. You might have a department object with its employees and manager.
Once you have the real world objects you can then group them. The shopping cart, order, and checkout might go into the same section. You may put products and categories into the catalog group. Customers and store admin may be their own group.
Object-Oriented Thinking
Object-oriented thinking has been around since before object-oriented programming. People do it without knowing it might be called object-oriented. It helps us conceptualize a system and better grasp it. It helps us wrap our mind around a system without blowing a fuse. It makes programming easier for us and easier for others coming to our code. Object-oriented programming was created to make it easy to transfer our object-oriented thinking into code, although we can still implement an object-oriented design procedurally. This is the real power of object-oriented programming.