Tuesday, August 6, 2019

Exclusive Post for ACTL Training (Cloud Computing)

Cloud computing is the delivery of computing services (including servers, storage, databases, networking, software, analytics, and intelligence) over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale.

Top benefits of cloud computing

Cloud computing is a big shift from the traditional way businesses think about IT resources. Organizations commonly turn to cloud computing services for seven reasons: cost savings, speed of provisioning, global scale, productivity, performance, reliability, and security.

Types of Cloud Computing

Not all clouds are the same, and no one type of cloud computing is right for everyone. Several different models, types and services have evolved to help offer the right solution for your needs.

Public cloud

Public clouds are owned and operated by third-party cloud service providers, which deliver their computing resources, like servers and storage, over the Internet. Microsoft Azure is an example of a public cloud. With a public cloud, all hardware, software and other supporting infrastructure are owned and managed by the cloud provider. You access these services and manage your account using a web browser.

Private cloud

A private cloud refers to cloud computing resources used exclusively by a single business or organization. A private cloud can be physically located on the company’s on-site data center. Some companies also pay third-party service providers to host their private cloud. A private cloud is one in which the services and infrastructure are maintained on a private network.

Hybrid cloud

Hybrid clouds combine public and private clouds, bound together by technology that allows data and applications to be shared between them. By allowing data and applications to move between private and public clouds, a hybrid cloud gives your business greater flexibility, more deployment options and helps optimise your existing infrastructure, security and compliance.

Types of cloud services: IaaS, PaaS, serverless and SaaS

Most cloud computing services fall into four broad categories: infrastructure as a service (IaaS), platform as a service (PaaS), serverless and software as a service (SaaS). These are sometimes called the cloud computing stack because they build on top of one another. Knowing what they are and how they are different makes it easier to accomplish your business goals.

Infrastructure as a service (IaaS)

IaaS is the most basic category of cloud computing services. With IaaS, you rent IT infrastructure, such as servers and virtual machines (VMs), storage, networks and operating systems, from a cloud provider on a pay-as-you-go basis.

Platform as a service (PaaS)

Platform as a service refers to cloud computing services that supply an on-demand environment for developing, testing, delivering and managing software applications. PaaS is designed to make it easier for developers to quickly create web or mobile apps, without worrying about setting up or managing the underlying infrastructure of servers, storage, network and databases needed for development.

Serverless computing

Overlapping with PaaS, serverless computing focuses on building app functionality without spending time continually managing the servers and infrastructure required to do so. The cloud provider handles the setup, capacity planning and server management for you. Serverless architectures are highly scalable and event-driven, only using resources when a specific function or trigger occurs.
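To make the event-driven idea concrete, here is a minimal Python sketch. The `register`/`dispatch` names and the thumbnail example are invented for illustration, not any cloud provider's API; the point is simply that a handler consumes resources only when its trigger fires.

```python
# Toy event-driven dispatcher, in the spirit of serverless platforms.
# All names here are illustrative, not a real provider's API.

handlers = {}

def register(event_name):
    """Bind a function to an event, like attaching a cloud function to a trigger."""
    def decorator(fn):
        handlers[event_name] = fn
        return fn
    return decorator

@register("file_uploaded")
def make_thumbnail(event):
    # This code runs only when the trigger fires; there is no idle server to manage.
    return f"thumbnail created for {event['name']}"

def dispatch(event_name, event):
    """The 'platform' invokes the handler only when its trigger occurs."""
    return handlers[event_name](event)

print(dispatch("file_uploaded", {"name": "photo.jpg"}))  # thumbnail created for photo.jpg
```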

Software as a service (SaaS)

Software as a service is a method for delivering software applications over the Internet, on demand and typically on a subscription basis. With SaaS, cloud providers host and manage the software application and underlying infrastructure and handle any maintenance, like software upgrades and security patching. Users connect to the application over the Internet, usually with a web browser on their phone, tablet or PC.

Uses of cloud computing

You are probably using cloud computing right now, even if you don’t realize it. If you use an online service to send email, edit documents, watch movies or TV, listen to music, play games or store pictures and other files, it is likely that cloud computing is making it all possible behind the scenes. The first cloud computing services are barely a decade old, but already a variety of organizations, from tiny startups to global corporations and from government agencies to non-profits, are embracing the technology for all sorts of reasons.

Here are a few examples of what is possible today with cloud services from a cloud provider:

Create cloud-native applications
Test and build applications
Store, back up and recover data
Analyze data
Stream audio and video
Deliver software on demand

The End

Tuesday, February 7, 2012

Windows Server 2008 R2 Cluster Terminology

The following list contains the many terms associated with Windows Server 2008 R2 clustering technologies:

Cluster—A cluster is a group of independent servers (nodes) that are accessed and presented to the network as a single system.

Node—A node is an individual server that is a member of a cluster.

Cluster resource—A cluster resource is a service, application, IP address, disk, or network name defined and managed by the cluster. Within a cluster, cluster resources are grouped and managed together using cluster resource groups, now known as Services and Applications groups.

Services and Applications group—Cluster resources are contained within a cluster in a logical set called a Services and Applications group, historically referred to as a cluster group. Services and Applications groups are the units of failover within the cluster. When a cluster resource fails and cannot be restarted automatically, the Services and Applications group this resource is a part of will be taken offline, moved to another node in the cluster, and brought back online.

Client Access Point—A Client Access Point is a term used in Windows Server 2008 R2 failover clusters that represents the combination of a network name and an associated IP address resource. By default, when a new Services and Applications group is defined, a Client Access Point is created with a name and an IPv4 address. IPv6 is supported in failover clusters, but an IPv6 resource either needs to be added to an existing group or a generic Services and Applications group needs to be created with the necessary resources and resource dependencies.

Virtual cluster server—A virtual cluster server is a Services and Applications group that contains a Client Access Point, a disk resource, and at least one additional service- or application-specific resource. Virtual cluster server resources are accessed either by the Domain Name System (DNS) name or a NetBIOS name that references an IPv4 or IPv6 address. A virtual cluster server can in some cases also be accessed directly using the IPv4 or IPv6 address. The name and IP address remain the same regardless of which cluster node the virtual server is running on.

Active node—An active node is a node in the cluster that is currently running at least one Services and Applications group. A Services and Applications group can only be active on one node at a time, and all other nodes that can host the group are considered passive for that particular group.

Passive node—A passive node is a node in the cluster that is currently not running any Services and Applications groups.

Active/passive cluster—An active/passive cluster is a cluster that has at least one node running a Services and Applications group and additional nodes on which the group can be hosted but that are currently in a waiting state. This is a typical configuration when only a single Services and Applications group is deployed on a failover cluster.

Active/active cluster—An active/active cluster is a cluster in which each node is actively hosting or running at least one Services and Applications group. This is a typical configuration when multiple groups are deployed on a single failover cluster to maximize server or system usage. The downside is that when an active system fails, the remaining system or systems need to host all of the groups and provide the services and/or applications on the cluster to all necessary clients.

Cluster heartbeat—The cluster heartbeat is a term used to represent the communication that is kept between individual cluster nodes and that is used to determine node status. Heartbeat communication can occur on a designated network but is also performed on the same network as client communication. Due to this internode communication, network monitoring software and network administrators should be forewarned of the amount of network chatter between the cluster nodes. The amount of traffic that is generated by heartbeat communication is not large based on the size of the data, but the frequency of the communication might ring some network alarm bells.

Cluster quorum—The cluster quorum maintains the definitive cluster configuration data and the current state of each node, each Services and Applications group, and each resource and network in the cluster. Furthermore, when each node reads the quorum data, depending on the information retrieved, the node determines if it should remain available, shut down the cluster, or activate any particular Services and Applications groups on the local node. To extend this even further, failover clusters can be configured to use one of four different cluster quorum models, and essentially the quorum type chosen for a cluster defines the cluster. For example, a cluster that utilizes the Node and Disk Majority Quorum can be called a Node and Disk Majority cluster.

Cluster witness disk or file share—The cluster witness disk or witness file share is used to store the cluster configuration information and to help determine the state of the cluster when some, if not all, of the cluster nodes cannot be contacted.
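The majority rule behind quorum models such as Node Majority can be sketched in a few lines of Python. This is a simplification of the real Windows quorum algorithm, and the vote counts are illustrative; it only shows why a witness disk or file share matters for even-sized clusters.

```python
# Hedged sketch of the majority rule behind Node (and Disk) Majority quorum:
# the cluster stays online only while more than half of the quorum votes
# (nodes, plus an optional witness) remain reachable.

def has_quorum(reachable_votes, total_votes):
    """True when a strict majority of quorum votes can still communicate."""
    return reachable_votes > total_votes // 2

# Five nodes: losing two still leaves quorum; losing three does not.
print(has_quorum(3, 5))  # True
print(has_quorum(2, 5))  # False

# Four nodes split 2/2 lose quorum -- which is why a witness adds a fifth vote.
print(has_quorum(2, 4))  # False
```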

Generic cluster resources—Generic cluster resources were created to define and add new or undefined services, applications, or scripts that are not already included as available cluster resources. Adding a custom resource provides the ability for that resource to be failed over between cluster nodes when another resource in the same Services and Applications group fails. Also, when the group the custom resource is a member of moves to a different node, the custom resource will follow. One disadvantage or lack of functionality with custom resources is that the Failover Clustering feature cannot actively monitor the resource and, therefore, cannot provide the same level of resilience and recoverability as with predefined cluster resources. Generic cluster resources include the generic application, generic script, and generic service resource.

Shared storage—Shared storage is a term used to represent the disks and volumes presented to the Windows Server 2008 R2 cluster nodes as LUNs. In particular, shared storage can be accessed by each node on the cluster, but not simultaneously.

Cluster Shared Volumes—A Cluster Shared Volume is a disk or LUN defined within the cluster that can be accessed by multiple nodes in the cluster simultaneously. This is unlike any other cluster volume, which normally can only be accessed by one node at a time; currently the Cluster Shared Volume feature is only used on Hyper-V clusters, but its usage will be extended in the near future to any failover cluster that will support live migration.

LUN—LUN stands for Logical Unit Number. A LUN is used to identify a disk or a disk volume that is presented to a host server or multiple hosts by a shared storage array or a SAN. LUNs provided by shared storage arrays and SANs must meet many requirements before they can be used with failover clusters, but when they do, all active nodes in the cluster must have exclusive access to these LUNs.

Failover—Failover is the process of a Services and Applications group moving from the current active node to another available node in the cluster when a cluster resource fails. Failover occurs when a server becomes unavailable or when a resource in the cluster group fails and cannot recover within the failure threshold.
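The failover decision can be sketched as a toy example. The node names and the simple pick-the-next-node ordering below are purely illustrative; the real cluster service consults preferred owner lists, possible owners, and failover thresholds.

```python
# Toy sketch of failover: when the node hosting a Services and Applications
# group fails, the group moves to another surviving node, or goes offline
# if no node remains. Names and ordering are illustrative only.

def fail_over(failed_node, nodes_online):
    """Return the node that should now host the group, or None if none remain."""
    candidates = [n for n in nodes_online if n != failed_node]
    return candidates[0] if candidates else None

print(fail_over("NODE1", ["NODE2", "NODE3"]))  # NODE2 takes over the group
print(fail_over("NODE1", []))                  # None: no surviving node
```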

Failback—Failback is the process of a cluster group automatically moving back to a preferred node after the preferred node resumes operation. Failback is a nondefault configuration that can be enabled within the properties of a Services and Applications group. The cluster group must have a preferred node defined, and a failback threshold defined as well, for failback to function. A preferred node is the node you would like your cluster group to be running or hosted on during regular cluster operation when all cluster nodes are available. When a group is failing back, the cluster is performing the same failover operation but is triggered by the preferred node rejoining or resuming cluster operation instead of by a resource failure on the currently active node.

Live Migration—Live Migration is a new feature of Hyper-V that is enabled when virtual machines are deployed on a Windows Server 2008 R2 failover cluster. Live Migration enables Hyper-V virtual machines on the failover cluster to be moved between cluster nodes without disrupting communication or access to the virtual machine. Live Migration utilizes a Cluster Shared Volume that is accessed by all nodes in the group simultaneously, and it transfers the memory between the nodes during active client communication to maintain availability. Live Migration is currently only used with Hyper-V failover clusters but will most likely extend to many other Microsoft services and applications in the near future.

Quick Migration—With Hyper-V virtual machines on failover clusters, Quick Migration provides the option for failover cluster administrators to move the virtual machine to another node without shutting the virtual machine off. This utilizes the virtual machine’s shutdown settings options; if set to Save, the default setting, performing a Quick Migration will save the current memory state, move the virtual machine to the desired node, and resume operation shortly afterward. End users should only encounter a short disruption in service and should reconnect without issue, depending on the service or application hosted within that virtual machine. Quick Migration does not require Cluster Shared Volumes to function.

Geographically dispersed clusters—These are clusters that span physical locations and sometimes networks to provide failover functionality in remote buildings and data centers, usually across a WAN link. These clusters can now span different networks and can provide failover functionality, but network response and throughput must be good, and data replication is not handled by the cluster.

Multisite cluster—Geographically dispersed clusters are commonly referred to as multisite clusters, as cluster nodes are deployed in different Active Directory sites. Multisite clusters can provide access to resources across a WAN and can support automatic failover of Services and Applications groups defined within the cluster.

Stretch clusters—A stretch cluster is a common term that, in some cases, refers to geographically dispersed clusters in which different subnets are used but each of the subnets is part of the same Active Directory site—hence, the term stretch, as in stretching the AD site across the WAN. In other cases, this term is used to describe a geographically dispersed cluster, as in the cluster stretches between geographic sites.

Thursday, May 26, 2011

Windows Server 2008 Boot Process!!!

Here’s a brief description of the Windows Server 2008 boot process.
  1. The system is powered on
  2. The BIOS loads (using settings stored in CMOS) and runs POST
  3. The BIOS looks for the MBR on the bootable device
  4. Through the MBR, the boot sector is located and BOOTMGR is loaded
  5. BOOTMGR looks for the active partition
  6. BOOTMGR reads the BCD file from the \Boot directory on the active partition
  7. The BCD (Boot Configuration Data) store contains various configuration parameters (this information was previously stored in boot.ini)
  8. BOOTMGR transfers control to the Windows Boot Loader (winload.exe), or to winresume.exe in case the system was hibernated
  9. Winload.exe loads drivers that are set to start at boot and then transfers control to the Windows kernel
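As a side note on step 3: the firmware recognizes a bootable MBR by the signature bytes 0x55 0xAA at the very end of the 512-byte sector. A quick Python sketch of that check, using a synthetic in-memory sector rather than reading a real disk:

```python
# The BIOS treats a sector as a valid MBR only if bytes 510-511 of the
# 512-byte sector hold the boot signature 0x55 0xAA.

def has_boot_signature(sector: bytes) -> bool:
    """Check the 0x55AA signature the firmware looks for in step 3."""
    return len(sector) == 512 and sector[510:512] == b"\x55\xaa"

# Synthetic sectors for the demo; a real MBR would be read from the disk.
bootable = bytearray(512)
bootable[510:512] = b"\x55\xaa"

print(has_boot_signature(bytes(bootable)))  # True
print(has_boot_signature(bytes(512)))       # False: all-zero sector
```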
Here are some articles for your reference.
Boot Process and BCDEdit
Server 2008 Boot Process – Making a boot disk
Windows Server 2008: Startup Processes and Delayed Automatic Start

Remote Server Administration Tools for Windows 7 with SP1

I tried installing Remote Server Administration Tools for Windows 7 on my freshly installed Windows 7 with integrated SP1, and got an “update is not applicable to your computer” error message.

Here’s what Microsoft has to say about that

**Remote Server Administration Tools for Windows 7 can be installed ONLY on computers that are running the Enterprise, Professional, or Ultimate editions of Windows 7. This software CANNOT BE INSTALLED on computers that are running Windows 7 with Service Pack 1 (SP1). To run Remote Server Administration Tools for Windows 7 on a computer on which you want to run Windows 7 with SP1, first install Remote Server Administration Tools, and then upgrade to Service Pack 1.**

Nice, I have Windows 7 Professional with Integrated SP1… I found a workaround on Microsoft TechNet

  1. Create an RSAT folder on the C drive and copy amd64fre_GRMRSATX_MSU.msu (x64 version) into it
  2. Run CMD with administrator privileges
  3. Run this

    CD C:\RSAT
    expand -f:* "C:\RSAT\amd64fre_GRMRSATX_MSU.msu" C:\RSAT

  4. Open and edit C:\RSAT\Windows6.1-KB958830-x64.xml
  5. Run this

    MD EXPAND
    expand -f:* "C:\RSAT\Windows6.1-KB958830-x64.cab" "C:\RSAT\expand"
    CD EXPAND
    DISM.exe /Online /NoRestart /Add-Package /PackagePath:"microsoft-windows-remoteserveradministrationtools-package~31bf3856ad364e35~amd64~~6.1.7600.16385.mum" /PackagePath:"microsoft-windows-remoteserveradministrationtools-package~31bf3856ad364e35~amd64~en-us~6.1.7600.16385.mum" /PackagePath:"microsoft-windows-remoteserveradministrationtools-package-minilp~31bf3856ad364e35~amd64~en-us~6.1.7600.16385.mum"

Credit goes to http://sharepoint.tejic.com/?p=185

Configuring Windows Time for Active Directory

Special Thanks to tigermatt

I’ve had a few requests recently from people who were confused regarding how to configure time in their Active Directory domains – and some were playing with settings on servers and workstations to try to make things work. In this article, I’ll briefly explain how the time service works in Active Directory networks and general information on how you should go about configuring it.

For anyone not aware, all machines in an Active Directory environment automatically find a time server to sync time with. Workstations use their authenticating Domain Controller, and the DCs sync with the server holding the PDC Emulator FSMO role. In a multi-domain forest, the PDC Emulator in each child domain synchronises with a DC or the PDCe in the forest root domain. To ensure the time remains reliable across the forest, only the PDC Emulator in the forest root domain should ever sync with an external time source; this leads to only one source of time being used across the forest. The Windows Time Service blog has a great post entitled Keeping the domain on time which explains this in more detail, including a great graphic.

The Windows Time Settings

You can find the settings for the Time Service in the registry, under HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters. The most important value to note is the ‘Type’ string; on any domain machine other than the PDC Emulator in the forest root, this should be set to NT5DS. That name isn’t particularly descriptive; if it is set, it means the machine is finding a time server in the Active Directory hierarchy.

If it isn’t set to that, you should think about resetting the time service on that machine. To do that, run a Command Prompt as an Administrator and execute the following commands:

net stop w32time
w32tm /unregister
w32tm /register
net start w32time

Check the registry again, and the Type should now be in domain sync mode (NT5DS).

Sometimes, you may find an NTPServer key in the registry despite the Type being set to NT5DS. NT5DS doesn’t use an NTP Server, so what gives? This setting is simply left over from prior to the machine being joined to the domain, when it was in a workgroup. Provided the Type value is set correctly, the NTPServer entry can be completely ignored or even deleted. Running the above commands on a domain-joined machine will delete it automatically.

The Group Policy Settings

There are also a number of Group Policy settings for the time service. These can be found in Computer Configuration\Administrative Templates\System\Windows Time Service.

I do not encourage you to change these settings; if you have done so, you probably want to revert the policies to ‘Not Configured’. There are reasons why you may make the odd change, but in general, no changes are required and you can actually break the time sync if you do make them.

If you are interested in reading further about what they do, the Windows Time Service blog has another great page going through them: Group Policy Settings Explained.

The Forest Root PDC Emulator Settings

After a bit of a configuration reset, all your DCs, member servers and workstations should now be set to sync from the domain hierarchy. But what about the PDC Emulator in the forest root?

The fact of the matter is the PDCe doesn’t actually need to synchronise with anything. It automatically designates itself the most reliable time server in the domain and it can run quite happily like that, without ever talking to an external time server. My earlier blog post entitled Time: Reliable or accurate? describes why.

However, to have an easy life and keep your users from complaining, it is almost always a good idea to have some form of external time sync on the forest root PDC Emulator. There are a number of ways to do this – for example, an external hardware clock which syncs with GPS. However, the most common (and cheapest – free) solution is synchronising with another NTP server on the Internet. I often use the servers closest to me which participate in possibly the largest time service, the NTP Project (list of time servers). Be aware that if you are bound by SLAs (my company certainly is), by its very nature, the NTP project most probably isn’t the resource for you.
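As background on NTP itself, the protocol w32tm and the pool servers speak: NTP timestamps count seconds from 1 January 1900 UTC, whereas Unix time counts from 1 January 1970 UTC, a fixed offset of 2,208,988,800 seconds (70 years including 17 leap days). A small sketch of the conversion:

```python
# NTP stamps time as seconds since 1900-01-01 UTC; Unix time counts from
# 1970-01-01 UTC. The gap is a fixed 2,208,988,800 seconds.

NTP_UNIX_OFFSET = 2_208_988_800

def ntp_to_unix(ntp_seconds: int) -> int:
    """Convert an NTP seconds count to a Unix timestamp."""
    return ntp_seconds - NTP_UNIX_OFFSET

# The NTP timestamp of the Unix epoch itself converts to exactly 0.
print(ntp_to_unix(NTP_UNIX_OFFSET))       # 0
print(ntp_to_unix(NTP_UNIX_OFFSET + 60))  # 60
```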

To configure the time sync on the PDCe, you need to execute the following commands. I’d strongly suggest you get a level playing field by resetting the time service using the instructions above before you start.

w32tm /config /manualpeerlist:"uk.pool.ntp.org,0x8 europe.pool.ntp.org,0x8" /syncfromflags:MANUAL /reliable:yes /update

What’s that command doing?

That command is a rather hefty command, so you may like to know exactly what it is doing to your server. All the changes are taking place in the registry at the key I posted above; using the w32tm tool to make the configuration changes is simply much easier than doing it manually yourself.

/config causes the tool to enter configuration mode. There are a number of other modes it supports which you can find by running w32tm /?.

/manualpeerlist allows you to specify the NTP server or servers you wish to synchronise time with. In this instance, each server’s DNS name or IP address should have a comma followed by the string 0x8. This instructs Windows to send requests to this external server in client mode. If you enter multiple servers, which I suggest, put the servers in quotation marks and separate each entry with a space. The value you specify here is written back to the NTPServer value in the time service’s registry key.

/syncfromflags tells the time service where it should sync time from. You can specify two entries for this – either DOMHIER or MANUAL. The former causes the time service to synchronise with the Domain Hierarchy (sets NT5DS in the Type key in the registry) whereas the latter tells the time service to sync with the server(s) you specified in the Manual Peer List. MANUAL sets Type to NTP.

/reliable sets the server to be a reliable source of time for the domain. Strictly it isn’t required, because the PDC Emulator in the forest root is always the most reliable time server, but I like to include it anyway.

Finally, /update notifies the time service the values have changed, so the new settings are used with immediate effect. If this isn’t included, the registry is updated but the new values will only be used by the time service when its service or the server itself is restarted.

After you’ve run that command, you might want to take a look in the registry to see what changes have been made, and whether they are as you expected.

Check Time Synchronisation

You may be intrigued to know whether the time sync is working correctly. You can do this in one of two ways.

The safest is to wait for a scheduled time sync to take place, or restart the machine. Either will trigger Event ID 35 to be logged in the System log. This event’s description shows the time server the machine is synchronising with. This will be logged on both the PDC Emulator and all DCs, member servers and workstations. You can check for this on member machines to ensure a DC in the domain hierarchy is being found and used correctly – and to ensure your custom NTP servers configured on the PDC Emulator are being used as intended.

Alternatively, putting your cowboy hat on, you can force a time synchronisation. Set the time a minute or two out from what it should be, then return to the command prompt and run w32tm /resync /rediscover. After a few moments, the above event should be logged, and a healthy time service should cause the time on the system to be set back to normal.

As a note, no time synchronisation will take place if the difference between the current system time and the new time provided by the time server is too great. A minute or two is fine, but I would not set the difference to be any more than that. The system checks this difference at each sync, and will reject the new time provided by the time server if it is too large.
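That sanity check amounts to a simple threshold test, sketched below. The Windows Time service exposes limits of this kind through its MaxPosPhaseCorrection/MaxNegPhaseCorrection settings, but the exact defaults vary by OS version and role, so treat the 48-hour figure here as purely illustrative:

```python
# Sketch of the sanity check described above: a correction is accepted only
# when the offset between local and server time is under some threshold.
# The 48-hour limit is illustrative, not an exact Windows default.

MAX_OFFSET_SECONDS = 48 * 3600

def accept_sync(local_time: float, server_time: float) -> bool:
    """Reject time corrections that are implausibly large."""
    return abs(server_time - local_time) <= MAX_OFFSET_SECONDS

print(accept_sync(1000.0, 1120.0))     # True: a 2-minute skew is fine
print(accept_sync(0.0, 100 * 3600.0))  # False: a 100-hour jump is rejected
```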


You should now have an understanding of how the time service works and where it stores its settings in the registry. While time isn’t one of the most fun services an Active Directory administrator will work with, it is important you ensure the forest stays in sync if you want to avoid major problems with time skew, Kerberos and Active Directory in general.

Friday, April 15, 2011

LizaMoon Hack: Mass SQL Injection Source(stopthehacker.com)


SQL injection is a technique used by malicious hackers and security researchers to inject code into a website. It exploits improper handling of input by websites, such as taking raw input from forms and using it directly in database queries.

SQL Injection continues to be a major security vulnerability. Malicious hackers can exploit SQL injection vulnerabilities to insert malware onto websites without the knowledge of the website owner.

LizaMoon Mass SQL Injection
Recently, Websense published a report detailing LizaMoon, what they deem to be one of the most widespread SQL injection attacks.

This attack primarily injects a single piece of script code into compromised pages; the injected link loads a fake AV page.

What Links are Injected?
We appreciate the information that Websense researchers have shared so far. Perhaps we can add a little more detail to this information.

We observe SQL injection attacks on a daily basis across a corpus of almost 200,000 samples of web malware. These attacks can be seen on websites every day, and they are not restricted to injecting just one malicious link inside benign web pages.

For more information take a look at our post about how hackers can inject multiple links to compromised sites via SQL injection of benign sites.

In this case, the malicious link was not injected alone; several additional malicious links were injected alongside it.

Who owns these malicious sites?
Most of the web sites seem to be registered to the following entity.

Registrant Contact:
Vasea Petrovich
Moscow, 76549

Administrative Contact:
Vasea Petrovich (tik0066@gmail.com)
Moscow, 76549

Technical Contact:
Vasea Petrovich (tik0066@gmail.com)
Moscow, 76549

How Do I Protect My Site?
Webmasters and administrators should search for instances of each malicious link in their sites to ensure that they remove all occurrences of the injected links. More importantly, it is critical to identify the cause of the SQL injection that allowed the site to be compromised.
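Beyond cleaning up injected links, the lasting fix is to stop building SQL statements from raw input. Here is a hedged Python/sqlite3 sketch of the difference; the table, column, and payload strings are invented for the demo, and real sites would apply the same pattern with their own database driver:

```python
import sqlite3

# Parameterized queries keep user input as data, so injected SQL is never
# executed. Table and payload below are invented for illustration.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER, title TEXT)")
conn.execute("INSERT INTO pages VALUES (1, 'home')")

malicious = "home'; UPDATE pages SET title='pwned'; --"

# Vulnerable pattern (do NOT do this): raw input concatenated into the SQL,
# which is exactly what LizaMoon-style attacks exploit:
#   query = "SELECT id FROM pages WHERE title = '" + malicious + "'"

# Safe pattern: the driver binds the value; the payload matches no row and
# never runs as SQL.
rows = conn.execute("SELECT id FROM pages WHERE title = ?", (malicious,)).fetchall()
print(rows)  # []
```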